00:00:00.001 Started by upstream project "autotest-per-patch" build number 126233 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.081 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.082 The recommended git tool is: git 00:00:00.082 using credential 00000000-0000-0000-0000-000000000002 00:00:00.084 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.113 Fetching changes from the remote Git repository 00:00:00.117 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.173 Using shallow fetch with depth 1 00:00:00.173 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.173 > git --version # timeout=10 00:00:00.217 > git --version # 'git version 2.39.2' 00:00:00.217 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.249 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.249 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.612 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.622 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.635 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:06.635 > git config core.sparsecheckout # timeout=10 00:00:06.646 > git read-tree -mu HEAD # timeout=10 00:00:06.661 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:06.681 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:06.682 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:06.837 [Pipeline] Start of Pipeline 00:00:06.851 [Pipeline] library 00:00:06.853 Loading library shm_lib@master 00:00:06.853 Library shm_lib@master is cached. Copying from home. 00:00:06.874 [Pipeline] node 00:00:06.882 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.888 [Pipeline] { 00:00:06.899 [Pipeline] catchError 00:00:06.900 [Pipeline] { 00:00:06.911 [Pipeline] wrap 00:00:06.921 [Pipeline] { 00:00:06.929 [Pipeline] stage 00:00:06.931 [Pipeline] { (Prologue) 00:00:07.110 [Pipeline] sh 00:00:07.392 + logger -p user.info -t JENKINS-CI 00:00:07.412 [Pipeline] echo 00:00:07.415 Node: CYP9 00:00:07.423 [Pipeline] sh 00:00:07.725 [Pipeline] setCustomBuildProperty 00:00:07.738 [Pipeline] echo 00:00:07.740 Cleanup processes 00:00:07.746 [Pipeline] sh 00:00:08.032 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.032 1241087 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.046 [Pipeline] sh 00:00:08.337 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.337 ++ grep -v 'sudo pgrep' 00:00:08.337 ++ awk '{print $1}' 00:00:08.337 + sudo kill -9 00:00:08.337 + true 00:00:08.353 [Pipeline] cleanWs 00:00:08.364 [WS-CLEANUP] Deleting project workspace... 00:00:08.364 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.371 [WS-CLEANUP] done 00:00:08.376 [Pipeline] setCustomBuildProperty 00:00:08.393 [Pipeline] sh 00:00:08.682 + sudo git config --global --replace-all safe.directory '*' 00:00:08.769 [Pipeline] httpRequest 00:00:08.799 [Pipeline] echo 00:00:08.801 Sorcerer 10.211.164.101 is alive 00:00:08.808 [Pipeline] httpRequest 00:00:08.813 HttpMethod: GET 00:00:08.813 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.814 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.827 Response Code: HTTP/1.1 200 OK 00:00:08.828 Success: Status code 200 is in the accepted range: 200,404 00:00:08.828 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:11.825 [Pipeline] sh 00:00:12.112 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:12.130 [Pipeline] httpRequest 00:00:12.167 [Pipeline] echo 00:00:12.170 Sorcerer 10.211.164.101 is alive 00:00:12.181 [Pipeline] httpRequest 00:00:12.186 HttpMethod: GET 00:00:12.186 URL: http://10.211.164.101/packages/spdk_06cc9fb0c444f26987e4cef1ef6ad9ae64de47db.tar.gz 00:00:12.187 Sending request to url: http://10.211.164.101/packages/spdk_06cc9fb0c444f26987e4cef1ef6ad9ae64de47db.tar.gz 00:00:12.212 Response Code: HTTP/1.1 200 OK 00:00:12.212 Success: Status code 200 is in the accepted range: 200,404 00:00:12.213 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_06cc9fb0c444f26987e4cef1ef6ad9ae64de47db.tar.gz 00:00:58.193 [Pipeline] sh 00:00:58.480 + tar --no-same-owner -xf spdk_06cc9fb0c444f26987e4cef1ef6ad9ae64de47db.tar.gz 00:01:01.791 [Pipeline] sh 00:01:02.077 + git -C spdk log --oneline -n5 00:01:02.077 06cc9fb0c build: fix unit test builds that directly use env_dpdk 00:01:02.077 406b3b1b5 util: allow NULL saddr/caddr for spdk_net_getaddr 00:01:02.077 1053f1b13 util: don't allow users to pass caddr/cport for listen sockets 00:01:02.077 0663932f5 util: add spdk_net_getaddr 00:01:02.077 9da437b46 util: move module/sock/sock_kernel.h contents to net.c 00:01:02.090 [Pipeline] } 00:01:02.107 [Pipeline] // stage 00:01:02.117 [Pipeline] stage 00:01:02.119 [Pipeline] { (Prepare) 00:01:02.139 [Pipeline] writeFile 00:01:02.155 [Pipeline] sh 00:01:02.488 + logger -p user.info -t JENKINS-CI 00:01:02.502 [Pipeline] sh 00:01:02.783 + logger -p user.info -t JENKINS-CI 00:01:02.797 [Pipeline] sh 00:01:03.083 + cat autorun-spdk.conf 00:01:03.083 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.083 SPDK_TEST_NVMF=1 00:01:03.083 SPDK_TEST_NVME_CLI=1 00:01:03.083 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:03.083 SPDK_TEST_NVMF_NICS=e810 00:01:03.083 SPDK_TEST_VFIOUSER=1 00:01:03.083 SPDK_RUN_UBSAN=1 00:01:03.083 NET_TYPE=phy 00:01:03.091 RUN_NIGHTLY=0 00:01:03.096 [Pipeline] readFile 00:01:03.127 [Pipeline] withEnv 00:01:03.130 [Pipeline] { 00:01:03.145 [Pipeline] sh 00:01:03.429 + set -ex 00:01:03.429 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:03.429 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:03.429 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.429 ++ SPDK_TEST_NVMF=1 00:01:03.429 ++ SPDK_TEST_NVME_CLI=1 00:01:03.429 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:03.429 ++ SPDK_TEST_NVMF_NICS=e810 00:01:03.429 ++ SPDK_TEST_VFIOUSER=1 00:01:03.429 ++ SPDK_RUN_UBSAN=1 00:01:03.429 ++ NET_TYPE=phy 00:01:03.429 ++ RUN_NIGHTLY=0 00:01:03.429 + case $SPDK_TEST_NVMF_NICS in 00:01:03.429 + DRIVERS=ice 00:01:03.429 + [[ 
tcp == \r\d\m\a ]] 00:01:03.429 + [[ -n ice ]] 00:01:03.429 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:03.429 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:03.429 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:03.429 rmmod: ERROR: Module irdma is not currently loaded 00:01:03.429 rmmod: ERROR: Module i40iw is not currently loaded 00:01:03.429 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:03.429 + true 00:01:03.429 + for D in $DRIVERS 00:01:03.429 + sudo modprobe ice 00:01:03.429 + exit 0 00:01:03.438 [Pipeline] } 00:01:03.452 [Pipeline] // withEnv 00:01:03.457 [Pipeline] } 00:01:03.469 [Pipeline] // stage 00:01:03.478 [Pipeline] catchError 00:01:03.480 [Pipeline] { 00:01:03.496 [Pipeline] timeout 00:01:03.496 Timeout set to expire in 50 min 00:01:03.498 [Pipeline] { 00:01:03.513 [Pipeline] stage 00:01:03.515 [Pipeline] { (Tests) 00:01:03.532 [Pipeline] sh 00:01:03.819 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:03.819 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:03.819 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:03.819 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:03.819 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:03.819 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:03.819 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:03.819 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:03.819 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:03.819 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:03.819 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:03.819 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:03.819 + source /etc/os-release 00:01:03.819 ++ NAME='Fedora Linux' 00:01:03.819 ++ VERSION='38 (Cloud Edition)' 00:01:03.819 ++ ID=fedora 00:01:03.819 ++ VERSION_ID=38 00:01:03.819 ++ VERSION_CODENAME= 00:01:03.819 ++ PLATFORM_ID=platform:f38 00:01:03.819 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:03.819 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:03.819 ++ LOGO=fedora-logo-icon 00:01:03.819 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:03.819 ++ HOME_URL=https://fedoraproject.org/ 00:01:03.819 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:03.819 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:03.819 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:03.819 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:03.819 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:03.819 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:03.819 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:03.819 ++ SUPPORT_END=2024-05-14 00:01:03.819 ++ VARIANT='Cloud Edition' 00:01:03.819 ++ VARIANT_ID=cloud 00:01:03.819 + uname -a 00:01:03.819 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:03.819 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:06.387 Hugepages 00:01:06.387 node hugesize free / total 00:01:06.387 node0 1048576kB 0 / 0 00:01:06.387 node0 2048kB 0 / 0 00:01:06.387 node1 1048576kB 0 / 0 00:01:06.387 node1 2048kB 0 / 0 00:01:06.387 00:01:06.387 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:06.387 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:06.387 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:06.387 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:06.387 
I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:06.387 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:06.387 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:06.387 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:06.387 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:06.387 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:06.387 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:06.387 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:06.387 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:06.387 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:06.387 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:06.387 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:06.387 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:06.387 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:06.387 + rm -f /tmp/spdk-ld-path 00:01:06.387 + source autorun-spdk.conf 00:01:06.387 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.387 ++ SPDK_TEST_NVMF=1 00:01:06.387 ++ SPDK_TEST_NVME_CLI=1 00:01:06.387 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:06.387 ++ SPDK_TEST_NVMF_NICS=e810 00:01:06.387 ++ SPDK_TEST_VFIOUSER=1 00:01:06.387 ++ SPDK_RUN_UBSAN=1 00:01:06.387 ++ NET_TYPE=phy 00:01:06.387 ++ RUN_NIGHTLY=0 00:01:06.387 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:06.387 + [[ -n '' ]] 00:01:06.387 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:06.387 + for M in /var/spdk/build-*-manifest.txt 00:01:06.387 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:06.387 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:06.387 + for M in /var/spdk/build-*-manifest.txt 00:01:06.387 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:06.387 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:06.387 ++ uname 00:01:06.387 + [[ Linux == \L\i\n\u\x ]] 00:01:06.387 + sudo dmesg -T 00:01:06.387 + sudo dmesg --clear 00:01:06.649 + dmesg_pid=1242058 00:01:06.649 + [[ Fedora Linux == FreeBSD ]] 00:01:06.649 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:06.649 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:06.649 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:06.649 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:06.649 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:06.649 + [[ -x /usr/src/fio-static/fio ]] 00:01:06.649 + export FIO_BIN=/usr/src/fio-static/fio 00:01:06.649 + FIO_BIN=/usr/src/fio-static/fio 00:01:06.649 + sudo dmesg -Tw 00:01:06.649 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:06.649 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:06.649 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:06.649 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:06.649 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:06.649 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:06.649 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:06.649 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:06.649 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:06.649 Test configuration: 00:01:06.649 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.649 SPDK_TEST_NVMF=1 00:01:06.649 SPDK_TEST_NVME_CLI=1 00:01:06.649 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:06.649 SPDK_TEST_NVMF_NICS=e810 00:01:06.649 SPDK_TEST_VFIOUSER=1 00:01:06.649 SPDK_RUN_UBSAN=1 00:01:06.649 NET_TYPE=phy 00:01:06.649 RUN_NIGHTLY=0 20:37:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:06.649 20:37:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:06.649 20:37:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:06.649 20:37:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:06.649 20:37:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.649 20:37:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.649 20:37:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.649 20:37:10 -- paths/export.sh@5 -- $ export PATH 00:01:06.649 20:37:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.649 20:37:10 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:06.649 20:37:10 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:06.649 20:37:10 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721068630.XXXXXX 00:01:06.649 20:37:10 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721068630.A1KO20 00:01:06.649 20:37:10 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:06.649 20:37:10 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:06.649 20:37:10 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:06.649 20:37:10 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:06.649 20:37:10 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:06.649 20:37:10 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:06.649 20:37:10 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:06.649 20:37:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.649 20:37:10 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:06.649 20:37:10 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:06.649 20:37:10 -- pm/common@17 -- $ local monitor 00:01:06.649 20:37:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.649 20:37:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.649 20:37:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.649 20:37:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.649 20:37:10 -- pm/common@21 -- $ date +%s 00:01:06.649 20:37:10 -- pm/common@25 -- $ sleep 1 00:01:06.649 20:37:10 -- pm/common@21 -- $ date +%s 00:01:06.650 20:37:10 -- pm/common@21 -- $ date +%s 00:01:06.650 20:37:10 -- pm/common@21 -- $ date +%s 00:01:06.650 20:37:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721068630 00:01:06.650 20:37:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721068630 00:01:06.650 20:37:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721068630 00:01:06.650 20:37:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721068630 00:01:06.650 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721068630_collect-vmstat.pm.log 00:01:06.650 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721068630_collect-cpu-load.pm.log 00:01:06.650 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721068630_collect-cpu-temp.pm.log 00:01:06.650 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721068630_collect-bmc-pm.bmc.pm.log 00:01:07.592 20:37:11 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:07.592 20:37:11 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:07.592 20:37:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:07.592 20:37:11 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:07.592 20:37:11 -- spdk/autobuild.sh@16 -- $ date -u 00:01:07.592 Mon Jul 15 06:37:11 PM UTC 2024 00:01:07.592 20:37:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:07.592 v24.09-pre-220-g06cc9fb0c 00:01:07.592 20:37:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:07.592 20:37:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:07.592 20:37:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:07.592 20:37:11 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:07.592 20:37:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:07.592 20:37:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:07.853 ************************************ 00:01:07.853 START TEST ubsan 00:01:07.853 ************************************ 00:01:07.853 20:37:11 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:07.853 using ubsan 00:01:07.853 00:01:07.853 real 0m0.000s 00:01:07.853 user 0m0.000s 00:01:07.853 sys 0m0.000s 00:01:07.853 20:37:11 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:07.853 20:37:11 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:07.853 ************************************ 00:01:07.853 END TEST ubsan 00:01:07.853 ************************************ 00:01:07.853 20:37:11 -- common/autotest_common.sh@1142 -- $ return 0 00:01:07.853 20:37:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:07.853 20:37:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:07.853 20:37:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:07.853 20:37:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:07.853 20:37:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:07.853 20:37:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:07.853 20:37:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:07.853 20:37:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:07.853 20:37:11 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:07.853 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:07.853 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:08.426 Using 'verbs' RDMA provider 00:01:24.274 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:36.557 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:36.557 Creating mk/config.mk...done. 00:01:36.557 Creating mk/cc.flags.mk...done. 00:01:36.557 Type 'make' to build. 
00:01:36.557 20:37:39 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:36.557 20:37:39 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:36.557 20:37:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:36.557 20:37:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:36.557 ************************************ 00:01:36.557 START TEST make 00:01:36.557 ************************************ 00:01:36.557 20:37:39 make -- common/autotest_common.sh@1123 -- $ make -j144 00:01:36.557 make[1]: Nothing to be done for 'all'. 00:01:37.933 The Meson build system 00:01:37.933 Version: 1.3.1 00:01:37.933 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:37.933 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:37.933 Build type: native build 00:01:37.933 Project name: libvfio-user 00:01:37.933 Project version: 0.0.1 00:01:37.933 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:37.933 C linker for the host machine: cc ld.bfd 2.39-16 00:01:37.933 Host machine cpu family: x86_64 00:01:37.933 Host machine cpu: x86_64 00:01:37.933 Run-time dependency threads found: YES 00:01:37.933 Library dl found: YES 00:01:37.933 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:37.933 Run-time dependency json-c found: YES 0.17 00:01:37.933 Run-time dependency cmocka found: YES 1.1.7 00:01:37.933 Program pytest-3 found: NO 00:01:37.933 Program flake8 found: NO 00:01:37.933 Program misspell-fixer found: NO 00:01:37.933 Program restructuredtext-lint found: NO 00:01:37.933 Program valgrind found: YES (/usr/bin/valgrind) 00:01:37.933 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:37.933 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:37.933 Compiler for C supports arguments -Wwrite-strings: YES 00:01:37.933 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:37.933 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:37.933 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:37.933 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:37.933 Build targets in project: 8 00:01:37.933 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:37.933 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:37.933 00:01:37.933 libvfio-user 0.0.1 00:01:37.933 00:01:37.933 User defined options 00:01:37.933 buildtype : debug 00:01:37.933 default_library: shared 00:01:37.933 libdir : /usr/local/lib 00:01:37.933 00:01:37.933 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.933 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:37.933 [1/37] Compiling C object samples/null.p/null.c.o 00:01:37.933 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:37.933 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:37.933 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:37.933 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:37.933 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:37.933 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:37.933 [8/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:37.933 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:37.933 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:37.933 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:37.933 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:38.192 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:38.192 [14/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:38.192 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:38.192 [16/37] Compiling C object samples/server.p/server.c.o 00:01:38.192 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:38.192 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:38.192 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:38.192 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:38.192 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:38.192 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:38.192 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:38.192 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:38.192 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:38.192 [26/37] Compiling C object samples/client.p/client.c.o 00:01:38.192 [27/37] Linking target samples/client 00:01:38.192 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:38.192 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:38.192 [30/37] Linking target test/unit_tests 00:01:38.192 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:38.453 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:38.453 [33/37] Linking target samples/server 00:01:38.453 [34/37] Linking target samples/null 00:01:38.453 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:38.453 [36/37] Linking target samples/lspci 00:01:38.453 [37/37] Linking target samples/gpio-pci-idio-16 00:01:38.453 INFO: autodetecting backend as ninja 00:01:38.453 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:38.453 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:38.713 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:38.713 ninja: no work to do. 00:01:45.314 The Meson build system 00:01:45.314 Version: 1.3.1 00:01:45.314 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:45.314 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:45.314 Build type: native build 00:01:45.314 Program cat found: YES (/usr/bin/cat) 00:01:45.314 Project name: DPDK 00:01:45.314 Project version: 24.03.0 00:01:45.314 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:45.314 C linker for the host machine: cc ld.bfd 2.39-16 00:01:45.314 Host machine cpu family: x86_64 00:01:45.314 Host machine cpu: x86_64 00:01:45.314 Message: ## Building in Developer Mode ## 00:01:45.314 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:45.314 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:45.314 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:45.314 Program python3 found: YES (/usr/bin/python3) 00:01:45.314 Program cat found: YES (/usr/bin/cat) 00:01:45.314 Compiler for C supports arguments -march=native: YES 00:01:45.314 Checking for size of "void *" : 8 00:01:45.314 Checking for size of "void *" : 8 (cached) 00:01:45.314 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:45.314 Library m found: YES 00:01:45.314 Library numa found: YES 00:01:45.314 Has header "numaif.h" : YES 00:01:45.314 Library fdt found: NO 00:01:45.314 Library execinfo found: NO 00:01:45.314 Has header "execinfo.h" : YES 00:01:45.314 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:45.314 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:45.314 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:45.314 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:45.314 Run-time dependency openssl found: YES 3.0.9 00:01:45.314 Run-time dependency libpcap found: YES 1.10.4 00:01:45.314 Has header "pcap.h" with dependency libpcap: YES 00:01:45.314 Compiler for C supports arguments -Wcast-qual: YES 00:01:45.314 Compiler for C supports arguments -Wdeprecated: YES 00:01:45.314 Compiler for C supports arguments -Wformat: YES 00:01:45.314 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:45.314 Compiler for C supports arguments -Wformat-security: NO 00:01:45.314 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:45.314 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:45.314 Compiler for C supports arguments -Wnested-externs: YES 00:01:45.314 Compiler for C supports arguments -Wold-style-definition: YES 00:01:45.314 Compiler for C supports arguments -Wpointer-arith: YES 00:01:45.314 Compiler for C supports arguments -Wsign-compare: YES 00:01:45.314 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:45.314 Compiler for C supports arguments -Wundef: YES 00:01:45.314 Compiler for C supports arguments -Wwrite-strings: YES 00:01:45.314 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:45.314 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:45.314 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:45.314 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:45.314 Program objdump found: YES (/usr/bin/objdump) 00:01:45.314 Compiler for C supports arguments -mavx512f: YES 00:01:45.314 Checking if "AVX512 checking" compiles: YES 00:01:45.314 Fetching value of define "__SSE4_2__" : 1 00:01:45.314 Fetching value of define "__AES__" : 1 00:01:45.314 Fetching value of define "__AVX__" : 1 00:01:45.314 Fetching value of define "__AVX2__" : 1 00:01:45.314 Fetching value of define "__AVX512BW__" : 1 00:01:45.314 Fetching value of define "__AVX512CD__" : 1 00:01:45.314 Fetching value of define "__AVX512DQ__" : 1 00:01:45.314 Fetching value of define "__AVX512F__" : 1 00:01:45.314 Fetching value of define "__AVX512VL__" : 1 00:01:45.314 Fetching value of define "__PCLMUL__" : 1 00:01:45.314 Fetching value of define "__RDRND__" : 1 00:01:45.314 Fetching value of define "__RDSEED__" : 1 00:01:45.314 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:45.314 Fetching value of define "__znver1__" : (undefined) 00:01:45.314 Fetching value of define "__znver2__" : (undefined) 00:01:45.314 Fetching value of define "__znver3__" : (undefined) 00:01:45.314 Fetching value of define "__znver4__" : (undefined) 00:01:45.314 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:45.314 Message: lib/log: Defining dependency "log" 00:01:45.314 Message: lib/kvargs: Defining dependency "kvargs" 00:01:45.314 Message: lib/telemetry: Defining dependency "telemetry" 00:01:45.314 Checking for function "getentropy" : NO 00:01:45.314 Message: lib/eal: Defining dependency "eal" 00:01:45.314 Message: lib/ring: Defining dependency "ring" 00:01:45.314 Message: lib/rcu: Defining dependency "rcu" 00:01:45.314 Message: lib/mempool: Defining dependency "mempool" 00:01:45.314 Message: lib/mbuf: Defining dependency "mbuf" 00:01:45.314 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:45.314 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:45.314 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:45.314 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:45.314 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:45.314 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:45.314 Compiler for C supports arguments -mpclmul: YES 00:01:45.314 Compiler for C supports arguments -maes: YES 00:01:45.314 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:45.314 Compiler for C supports arguments -mavx512bw: YES 00:01:45.314 Compiler for C supports arguments -mavx512dq: YES 00:01:45.314 Compiler for C supports arguments -mavx512vl: YES 00:01:45.314 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:45.314 Compiler for C supports arguments -mavx2: YES 00:01:45.314 Compiler for C supports arguments -mavx: YES 00:01:45.314 Message: lib/net: Defining dependency "net" 00:01:45.314 Message: lib/meter: Defining dependency "meter" 00:01:45.314 Message: lib/ethdev: Defining dependency "ethdev" 00:01:45.314 Message: lib/pci: Defining dependency "pci" 00:01:45.314 Message: lib/cmdline: Defining dependency "cmdline" 00:01:45.314 Message: lib/hash: Defining dependency "hash" 00:01:45.314 Message: lib/timer: Defining dependency "timer" 00:01:45.314 Message: lib/compressdev: Defining dependency "compressdev" 00:01:45.314 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:45.314 Message: lib/dmadev: Defining dependency "dmadev" 00:01:45.314 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:01:45.314 Message: lib/power: Defining dependency "power" 00:01:45.314 Message: lib/reorder: Defining dependency "reorder" 00:01:45.314 Message: lib/security: Defining dependency "security" 00:01:45.314 Has header "linux/userfaultfd.h" : YES 00:01:45.314 Has header "linux/vduse.h" : YES 00:01:45.314 Message: lib/vhost: Defining dependency "vhost" 00:01:45.314 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:45.314 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:45.314 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:45.314 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:45.314 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:45.314 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:45.314 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:45.314 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:45.314 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:45.314 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:45.314 Program doxygen found: YES (/usr/bin/doxygen) 00:01:45.314 Configuring doxy-api-html.conf using configuration 00:01:45.314 Configuring doxy-api-man.conf using configuration 00:01:45.314 Program mandb found: YES (/usr/bin/mandb) 00:01:45.314 Program sphinx-build found: NO 00:01:45.314 Configuring rte_build_config.h using configuration 00:01:45.314 Message: 00:01:45.314 ================= 00:01:45.314 Applications Enabled 00:01:45.314 ================= 00:01:45.314 00:01:45.314 apps: 00:01:45.314 00:01:45.314 00:01:45.314 Message: 00:01:45.314 ================= 00:01:45.314 Libraries Enabled 00:01:45.314 ================= 00:01:45.314 00:01:45.314 libs: 00:01:45.314 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:45.314 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:45.314 cryptodev, dmadev, power, reorder, security, vhost, 00:01:45.314 00:01:45.314 Message: 00:01:45.314 =============== 00:01:45.314 Drivers Enabled 00:01:45.314 =============== 00:01:45.314 00:01:45.314 common: 00:01:45.314 00:01:45.314 bus: 00:01:45.314 pci, vdev, 00:01:45.314 mempool: 00:01:45.314 ring, 00:01:45.314 dma: 00:01:45.314 00:01:45.314 net: 00:01:45.314 00:01:45.314 crypto: 00:01:45.314 00:01:45.314 compress: 00:01:45.314 00:01:45.314 vdpa: 00:01:45.314 00:01:45.314 00:01:45.314 Message: 00:01:45.314 ================= 00:01:45.314 Content Skipped 00:01:45.314 ================= 00:01:45.314 00:01:45.314 apps: 00:01:45.314 dumpcap: explicitly disabled via build config 00:01:45.314 graph: explicitly disabled via build config 00:01:45.314 pdump: explicitly disabled via build config 00:01:45.314 proc-info: explicitly disabled via build config 00:01:45.314 test-acl: explicitly disabled via build config 00:01:45.314 test-bbdev: explicitly disabled via build config 00:01:45.314 test-cmdline: explicitly disabled via build config 00:01:45.314 test-compress-perf: explicitly disabled via build config 00:01:45.314 test-crypto-perf: explicitly disabled via build config 00:01:45.314 test-dma-perf: explicitly disabled via build config 00:01:45.314 test-eventdev: explicitly disabled via build config 00:01:45.314 test-fib: explicitly disabled via build config 00:01:45.314 test-flow-perf: explicitly disabled via build config 00:01:45.315 test-gpudev: explicitly disabled via build config 00:01:45.315 
test-mldev: explicitly disabled via build config 00:01:45.315 test-pipeline: explicitly disabled via build config 00:01:45.315 test-pmd: explicitly disabled via build config 00:01:45.315 test-regex: explicitly disabled via build config 00:01:45.315 test-sad: explicitly disabled via build config 00:01:45.315 test-security-perf: explicitly disabled via build config 00:01:45.315 00:01:45.315 libs: 00:01:45.315 argparse: explicitly disabled via build config 00:01:45.315 metrics: explicitly disabled via build config 00:01:45.315 acl: explicitly disabled via build config 00:01:45.315 bbdev: explicitly disabled via build config 00:01:45.315 bitratestats: explicitly disabled via build config 00:01:45.315 bpf: explicitly disabled via build config 00:01:45.315 cfgfile: explicitly disabled via build config 00:01:45.315 distributor: explicitly disabled via build config 00:01:45.315 efd: explicitly disabled via build config 00:01:45.315 eventdev: explicitly disabled via build config 00:01:45.315 dispatcher: explicitly disabled via build config 00:01:45.315 gpudev: explicitly disabled via build config 00:01:45.315 gro: explicitly disabled via build config 00:01:45.315 gso: explicitly disabled via build config 00:01:45.315 ip_frag: explicitly disabled via build config 00:01:45.315 jobstats: explicitly disabled via build config 00:01:45.315 latencystats: explicitly disabled via build config 00:01:45.315 lpm: explicitly disabled via build config 00:01:45.315 member: explicitly disabled via build config 00:01:45.315 pcapng: explicitly disabled via build config 00:01:45.315 rawdev: explicitly disabled via build config 00:01:45.315 regexdev: explicitly disabled via build config 00:01:45.315 mldev: explicitly disabled via build config 00:01:45.315 rib: explicitly disabled via build config 00:01:45.315 sched: explicitly disabled via build config 00:01:45.315 stack: explicitly disabled via build config 00:01:45.315 ipsec: explicitly disabled via build config 00:01:45.315 pdcp: explicitly disabled via build config 00:01:45.315 fib: explicitly disabled via build config 00:01:45.315 port: explicitly disabled via build config 00:01:45.315 pdump: explicitly disabled via build config 00:01:45.315 table: explicitly disabled via build config 00:01:45.315 pipeline: explicitly disabled via build config 00:01:45.315 graph: explicitly disabled via build config 00:01:45.315 node: explicitly disabled via build config 00:01:45.315 00:01:45.315 drivers: 00:01:45.315 common/cpt: not in enabled drivers build config 00:01:45.315 common/dpaax: not in enabled drivers build config 00:01:45.315 common/iavf: not in enabled drivers build config 00:01:45.315 common/idpf: not in enabled drivers build config 00:01:45.315 common/ionic: not in enabled drivers build config 00:01:45.315 common/mvep: not in enabled drivers build config 00:01:45.315 common/octeontx: not in enabled drivers build config 00:01:45.315 bus/auxiliary: not in enabled drivers build config 00:01:45.315 bus/cdx: not in enabled drivers build config 00:01:45.315 bus/dpaa: not in enabled drivers build config 00:01:45.315 bus/fslmc: not in enabled drivers build config 00:01:45.315 bus/ifpga: not in enabled drivers build config 00:01:45.315 bus/platform: not in enabled drivers build config 00:01:45.315 bus/uacce: not in enabled drivers build config 00:01:45.315 bus/vmbus: not in enabled drivers build config 00:01:45.315 common/cnxk: not in enabled drivers build config 00:01:45.315 common/mlx5: not in enabled drivers build config 00:01:45.315 common/nfp: not in enabled drivers 
build config 00:01:45.315 common/nitrox: not in enabled drivers build config 00:01:45.315 common/qat: not in enabled drivers build config 00:01:45.315 common/sfc_efx: not in enabled drivers build config 00:01:45.315 mempool/bucket: not in enabled drivers build config 00:01:45.315 mempool/cnxk: not in enabled drivers build config 00:01:45.315 mempool/dpaa: not in enabled drivers build config 00:01:45.315 mempool/dpaa2: not in enabled drivers build config 00:01:45.315 mempool/octeontx: not in enabled drivers build config 00:01:45.315 mempool/stack: not in enabled drivers build config 00:01:45.315 dma/cnxk: not in enabled drivers build config 00:01:45.315 dma/dpaa: not in enabled drivers build config 00:01:45.315 dma/dpaa2: not in enabled drivers build config 00:01:45.315 dma/hisilicon: not in enabled drivers build config 00:01:45.315 dma/idxd: not in enabled drivers build config 00:01:45.315 dma/ioat: not in enabled drivers build config 00:01:45.315 dma/skeleton: not in enabled drivers build config 00:01:45.315 net/af_packet: not in enabled drivers build config 00:01:45.315 net/af_xdp: not in enabled drivers build config 00:01:45.315 net/ark: not in enabled drivers build config 00:01:45.315 net/atlantic: not in enabled drivers build config 00:01:45.315 net/avp: not in enabled drivers build config 00:01:45.315 net/axgbe: not in enabled drivers build config 00:01:45.315 net/bnx2x: not in enabled drivers build config 00:01:45.315 net/bnxt: not in enabled drivers build config 00:01:45.315 net/bonding: not in enabled drivers build config 00:01:45.315 net/cnxk: not in enabled drivers build config 00:01:45.315 net/cpfl: not in enabled drivers build config 00:01:45.315 net/cxgbe: not in enabled drivers build config 00:01:45.315 net/dpaa: not in enabled drivers build config 00:01:45.315 net/dpaa2: not in enabled drivers build config 00:01:45.315 net/e1000: not in enabled drivers build config 00:01:45.315 net/ena: not in enabled drivers build config 00:01:45.315 net/enetc: not in enabled drivers build config 00:01:45.315 net/enetfec: not in enabled drivers build config 00:01:45.315 net/enic: not in enabled drivers build config 00:01:45.315 net/failsafe: not in enabled drivers build config 00:01:45.315 net/fm10k: not in enabled drivers build config 00:01:45.315 net/gve: not in enabled drivers build config 00:01:45.315 net/hinic: not in enabled drivers build config 00:01:45.315 net/hns3: not in enabled drivers build config 00:01:45.315 net/i40e: not in enabled drivers build config 00:01:45.315 net/iavf: not in enabled drivers build config 00:01:45.315 net/ice: not in enabled drivers build config 00:01:45.315 net/idpf: not in enabled drivers build config 00:01:45.315 net/igc: not in enabled drivers build config 00:01:45.315 net/ionic: not in enabled drivers build config 00:01:45.315 net/ipn3ke: not in enabled drivers build config 00:01:45.315 net/ixgbe: not in enabled drivers build config 00:01:45.315 net/mana: not in enabled drivers build config 00:01:45.315 net/memif: not in enabled drivers build config 00:01:45.315 net/mlx4: not in enabled drivers build config 00:01:45.315 net/mlx5: not in enabled drivers build config 00:01:45.315 net/mvneta: not in enabled drivers build config 00:01:45.315 net/mvpp2: not in enabled drivers build config 00:01:45.315 net/netvsc: not in enabled drivers build config 00:01:45.315 net/nfb: not in enabled drivers build config 00:01:45.315 net/nfp: not in enabled drivers build config 00:01:45.315 net/ngbe: not in enabled drivers build config 00:01:45.315 net/null: not in 
enabled drivers build config 00:01:45.315 net/octeontx: not in enabled drivers build config 00:01:45.315 net/octeon_ep: not in enabled drivers build config 00:01:45.315 net/pcap: not in enabled drivers build config 00:01:45.315 net/pfe: not in enabled drivers build config 00:01:45.315 net/qede: not in enabled drivers build config 00:01:45.315 net/ring: not in enabled drivers build config 00:01:45.315 net/sfc: not in enabled drivers build config 00:01:45.315 net/softnic: not in enabled drivers build config 00:01:45.315 net/tap: not in enabled drivers build config 00:01:45.315 net/thunderx: not in enabled drivers build config 00:01:45.315 net/txgbe: not in enabled drivers build config 00:01:45.315 net/vdev_netvsc: not in enabled drivers build config 00:01:45.315 net/vhost: not in enabled drivers build config 00:01:45.315 net/virtio: not in enabled drivers build config 00:01:45.315 net/vmxnet3: not in enabled drivers build config 00:01:45.315 raw/*: missing internal dependency, "rawdev" 00:01:45.315 crypto/armv8: not in enabled drivers build config 00:01:45.315 crypto/bcmfs: not in enabled drivers build config 00:01:45.315 crypto/caam_jr: not in enabled drivers build config 00:01:45.315 crypto/ccp: not in enabled drivers build config 00:01:45.315 crypto/cnxk: not in enabled drivers build config 00:01:45.315 crypto/dpaa_sec: not in enabled drivers build config 00:01:45.315 crypto/dpaa2_sec: not in enabled drivers build config 00:01:45.315 crypto/ipsec_mb: not in enabled drivers build config 00:01:45.315 crypto/mlx5: not in enabled drivers build config 00:01:45.315 crypto/mvsam: not in enabled drivers build config 00:01:45.315 crypto/nitrox: not in enabled drivers build config 00:01:45.315 crypto/null: not in enabled drivers build config 00:01:45.315 crypto/octeontx: not in enabled drivers build config 00:01:45.315 crypto/openssl: not in enabled drivers build config 00:01:45.315 crypto/scheduler: not in enabled drivers build config 00:01:45.315 crypto/uadk: not in enabled drivers build config 00:01:45.315 crypto/virtio: not in enabled drivers build config 00:01:45.315 compress/isal: not in enabled drivers build config 00:01:45.315 compress/mlx5: not in enabled drivers build config 00:01:45.315 compress/nitrox: not in enabled drivers build config 00:01:45.315 compress/octeontx: not in enabled drivers build config 00:01:45.315 compress/zlib: not in enabled drivers build config 00:01:45.315 regex/*: missing internal dependency, "regexdev" 00:01:45.315 ml/*: missing internal dependency, "mldev" 00:01:45.315 vdpa/ifc: not in enabled drivers build config 00:01:45.315 vdpa/mlx5: not in enabled drivers build config 00:01:45.315 vdpa/nfp: not in enabled drivers build config 00:01:45.315 vdpa/sfc: not in enabled drivers build config 00:01:45.315 event/*: missing internal dependency, "eventdev" 00:01:45.315 baseband/*: missing internal dependency, "bbdev" 00:01:45.315 gpu/*: missing internal dependency, "gpudev" 00:01:45.315 00:01:45.315 00:01:45.315 Build targets in project: 84 00:01:45.315 00:01:45.315 DPDK 24.03.0 00:01:45.315 00:01:45.315 User defined options 00:01:45.315 buildtype : debug 00:01:45.315 default_library : shared 00:01:45.315 libdir : lib 00:01:45.315 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:45.315 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:45.315 c_link_args : 00:01:45.315 cpu_instruction_set: native 00:01:45.315 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:45.315 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:45.315 enable_docs : false 00:01:45.315 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:45.315 enable_kmods : false 00:01:45.315 max_lcores : 128 00:01:45.315 tests : false 00:01:45.315 00:01:45.315 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:45.315 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:45.316 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:45.316 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:45.316 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:45.316 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:45.316 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:45.316 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:45.316 [7/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:45.577 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:45.577 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:45.577 [10/267] Linking static target lib/librte_kvargs.a 00:01:45.577 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:45.577 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:45.578 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:45.578 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:45.578 [15/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:45.578 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:45.578 [17/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:45.578 [18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:45.578 [19/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:45.578 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:45.578 [21/267] Linking static target lib/librte_log.a 00:01:45.578 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:45.578 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:45.578 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:45.578 [25/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:45.578 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:45.578 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:45.578 [28/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:45.578 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:45.578 [30/267] Linking static target lib/librte_pci.a 00:01:45.578 [31/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:45.578 [32/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:45.578 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:45.838 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:45.838 [35/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:45.838 [36/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:45.838 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:45.838 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:45.838 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:45.838 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:45.838 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:45.838 [42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:45.838 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:45.838 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:45.838 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:45.838 [46/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.838 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:45.838 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:45.838 [49/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:45.838 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:45.838 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:45.838 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:45.838 [53/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:45.838 [54/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.838 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:45.838 [56/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:45.838 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:46.097 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:46.097 [59/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:46.097 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:46.097 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:46.097 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:46.097 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:46.097 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:46.097 [65/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:46.097 [66/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:46.097 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:46.097 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:46.097 [69/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:46.097 [70/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:46.097 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:46.097 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:46.097 [73/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:46.097 [74/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:46.097 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:46.097 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:46.097 [77/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:46.097 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:46.097 [79/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:46.097 [80/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:46.097 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:46.097 [82/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:46.097 [83/267] Linking static target lib/librte_rcu.a 00:01:46.097 [84/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:46.097 [85/267] Linking static target lib/librte_meter.a 00:01:46.097 [86/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:46.097 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:46.097 [88/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:46.097 [89/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:46.097 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:46.097 [91/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:46.097 [92/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:46.097 [93/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:46.097 [94/267] Linking static target lib/librte_ring.a 00:01:46.097 [95/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:46.097 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:46.097 [97/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:46.097 [98/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:46.097 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:46.097 [100/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:46.097 [101/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:46.097 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:46.097 [103/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:46.097 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:46.097 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:46.097 [106/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:46.097 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:46.097 [108/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:46.097 [109/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:46.097 [110/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:46.097 [111/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:46.097 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:46.097 [113/267] Linking static target lib/librte_telemetry.a 00:01:46.097 [114/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:46.097 [115/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:46.097 [116/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:46.097 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:46.097 [118/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:46.097 [119/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:46.097 [120/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:46.097 [121/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:46.097 [122/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:46.097 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:46.097 [124/267] Linking static target lib/librte_cmdline.a 00:01:46.097 [125/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:46.097 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:46.097 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:46.097 [128/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:46.097 [129/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:46.097 [130/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:46.097 [131/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:46.098 [132/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:46.098 [133/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:46.098 [134/267] Linking static target lib/librte_power.a 00:01:46.098 [135/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:46.098 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:46.098 [137/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:46.098 [138/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:46.098 [139/267] Linking static target lib/librte_timer.a 00:01:46.098 [140/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:46.098 [141/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:46.098 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:46.098 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:46.098 [144/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:46.098 [145/267] Linking static target lib/librte_reorder.a 00:01:46.098 [146/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:46.098 [147/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:46.098 [148/267] Linking static target lib/librte_compressdev.a 00:01:46.098 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:46.098 [150/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:46.098 [151/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:46.098 [152/267] Compiling C 
object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:46.098 [153/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:46.098 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:46.098 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:46.098 [156/267] Linking static target lib/librte_mempool.a 00:01:46.098 [157/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:46.098 [158/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:46.098 [159/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:46.098 [160/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:46.358 [161/267] Linking static target lib/librte_dmadev.a 00:01:46.358 [162/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:46.358 [163/267] Linking static target lib/librte_mbuf.a 00:01:46.358 [164/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.358 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:46.358 [166/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:46.358 [167/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:46.358 [168/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:46.358 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:46.358 [170/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:46.358 [171/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:46.358 [172/267] Linking static target lib/librte_net.a 00:01:46.358 [173/267] Linking static target lib/librte_security.a 00:01:46.358 [174/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.358 [175/267] Linking target lib/librte_log.so.24.1 00:01:46.358 [176/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:46.358 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:46.358 [178/267] Linking static target lib/librte_eal.a 00:01:46.358 [179/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:46.358 [180/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:46.358 [181/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:46.358 [182/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.358 [183/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.358 [184/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.358 [185/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:46.358 [186/267] Linking static target lib/librte_hash.a 00:01:46.358 [187/267] Linking static target drivers/librte_bus_vdev.a 00:01:46.358 [188/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:46.358 [189/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:46.358 [190/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.358 [191/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:46.358 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:46.617 [193/267] Linking target lib/librte_kvargs.so.24.1 
00:01:46.617 [194/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:46.617 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:46.617 [196/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:46.617 [197/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:46.617 [198/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.617 [199/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.617 [200/267] Linking static target lib/librte_cryptodev.a 00:01:46.617 [201/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:46.617 [202/267] Linking static target drivers/librte_bus_pci.a 00:01:46.617 [203/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.617 [204/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.617 [205/267] Linking static target drivers/librte_mempool_ring.a 00:01:46.617 [206/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:46.617 [207/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.617 [208/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.617 [209/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.617 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.617 [211/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:46.617 [212/267] Linking target lib/librte_telemetry.so.24.1 00:01:46.877 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.877 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:46.877 [215/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:46.877 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.877 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.877 [218/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.135 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:47.135 [220/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.135 [221/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.135 [222/267] Linking static target lib/librte_ethdev.a 00:01:47.135 [223/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.395 [224/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.395 [225/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.395 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.966 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:48.227 [228/267] Linking static target lib/librte_vhost.a 00:01:48.805 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:50.190 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.780 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.195 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.196 [233/267] Linking target lib/librte_eal.so.24.1 00:01:58.196 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:58.196 [235/267] Linking target lib/librte_ring.so.24.1 00:01:58.196 [236/267] Linking target lib/librte_timer.so.24.1 00:01:58.196 [237/267] Linking target lib/librte_meter.so.24.1 00:01:58.196 [238/267] Linking target lib/librte_pci.so.24.1 00:01:58.196 [239/267] Linking target lib/librte_dmadev.so.24.1 00:01:58.196 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:58.458 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:58.458 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:58.458 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:58.458 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:58.458 [245/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:58.458 [246/267] Linking target lib/librte_rcu.so.24.1 00:01:58.458 [247/267] Linking target lib/librte_mempool.so.24.1 00:01:58.458 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:58.458 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:58.458 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:58.719 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:58.719 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:58.719 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:58.719 [254/267] Linking target lib/librte_compressdev.so.24.1 00:01:58.719 [255/267] Linking target lib/librte_net.so.24.1 00:01:58.719 [256/267] Linking target lib/librte_reorder.so.24.1 00:01:58.719 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:58.981 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:58.981 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:58.981 [260/267] Linking target lib/librte_hash.so.24.1 00:01:58.981 [261/267] Linking target lib/librte_cmdline.so.24.1 00:01:58.981 [262/267] Linking target lib/librte_security.so.24.1 00:01:58.981 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:59.242 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:59.242 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:59.242 [266/267] Linking target lib/librte_power.so.24.1 00:01:59.242 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:59.242 INFO: autodetecting backend as ninja 00:01:59.242 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:00.188 CC lib/ut/ut.o 00:02:00.188 CC lib/log/log.o 00:02:00.188 CC lib/log/log_flags.o 00:02:00.188 CC lib/log/log_deprecated.o 00:02:00.188 CC lib/ut_mock/mock.o 00:02:00.450 LIB libspdk_ut.a 00:02:00.450 LIB libspdk_log.a 00:02:00.450 SO 
libspdk_ut.so.2.0 00:02:00.450 LIB libspdk_ut_mock.a 00:02:00.450 SO libspdk_log.so.7.0 00:02:00.450 SO libspdk_ut_mock.so.6.0 00:02:00.450 SYMLINK libspdk_ut.so 00:02:00.711 SYMLINK libspdk_ut_mock.so 00:02:00.711 SYMLINK libspdk_log.so 00:02:00.973 CXX lib/trace_parser/trace.o 00:02:00.973 CC lib/util/base64.o 00:02:00.973 CC lib/util/bit_array.o 00:02:00.973 CC lib/dma/dma.o 00:02:00.973 CC lib/util/cpuset.o 00:02:00.973 CC lib/util/crc16.o 00:02:00.973 CC lib/ioat/ioat.o 00:02:00.973 CC lib/util/crc32.o 00:02:00.973 CC lib/util/crc32c.o 00:02:00.973 CC lib/util/crc32_ieee.o 00:02:00.973 CC lib/util/crc64.o 00:02:00.973 CC lib/util/dif.o 00:02:00.973 CC lib/util/fd.o 00:02:00.973 CC lib/util/fd_group.o 00:02:00.973 CC lib/util/file.o 00:02:00.973 CC lib/util/hexlify.o 00:02:00.973 CC lib/util/math.o 00:02:00.973 CC lib/util/iov.o 00:02:00.973 CC lib/util/net.o 00:02:00.973 CC lib/util/pipe.o 00:02:00.973 CC lib/util/strerror_tls.o 00:02:00.973 CC lib/util/string.o 00:02:00.973 CC lib/util/uuid.o 00:02:00.973 CC lib/util/xor.o 00:02:00.973 CC lib/util/zipf.o 00:02:01.234 CC lib/vfio_user/host/vfio_user_pci.o 00:02:01.234 CC lib/vfio_user/host/vfio_user.o 00:02:01.234 LIB libspdk_dma.a 00:02:01.234 SO libspdk_dma.so.4.0 00:02:01.234 LIB libspdk_ioat.a 00:02:01.234 SO libspdk_ioat.so.7.0 00:02:01.234 SYMLINK libspdk_dma.so 00:02:01.496 SYMLINK libspdk_ioat.so 00:02:01.496 LIB libspdk_vfio_user.a 00:02:01.496 SO libspdk_vfio_user.so.5.0 00:02:01.496 LIB libspdk_util.a 00:02:01.496 SYMLINK libspdk_vfio_user.so 00:02:01.496 SO libspdk_util.so.9.1 00:02:01.757 SYMLINK libspdk_util.so 00:02:01.757 LIB libspdk_trace_parser.a 00:02:01.757 SO libspdk_trace_parser.so.5.0 00:02:02.019 SYMLINK libspdk_trace_parser.so 00:02:02.019 CC lib/idxd/idxd_user.o 00:02:02.019 CC lib/idxd/idxd.o 00:02:02.019 CC lib/conf/conf.o 00:02:02.019 CC lib/idxd/idxd_kernel.o 00:02:02.019 CC lib/rdma_utils/rdma_utils.o 00:02:02.019 CC lib/env_dpdk/env.o 00:02:02.019 CC lib/rdma_provider/common.o 00:02:02.019 CC lib/json/json_parse.o 00:02:02.019 CC lib/vmd/led.o 00:02:02.019 CC lib/vmd/vmd.o 00:02:02.019 CC lib/json/json_util.o 00:02:02.019 CC lib/env_dpdk/memory.o 00:02:02.019 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:02.019 CC lib/json/json_write.o 00:02:02.019 CC lib/env_dpdk/pci.o 00:02:02.019 CC lib/env_dpdk/init.o 00:02:02.019 CC lib/env_dpdk/threads.o 00:02:02.019 CC lib/env_dpdk/pci_ioat.o 00:02:02.019 CC lib/env_dpdk/pci_virtio.o 00:02:02.019 CC lib/env_dpdk/pci_vmd.o 00:02:02.019 CC lib/env_dpdk/pci_idxd.o 00:02:02.019 CC lib/env_dpdk/pci_event.o 00:02:02.019 CC lib/env_dpdk/sigbus_handler.o 00:02:02.019 CC lib/env_dpdk/pci_dpdk.o 00:02:02.019 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:02.019 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:02.280 LIB libspdk_rdma_provider.a 00:02:02.280 LIB libspdk_conf.a 00:02:02.280 SO libspdk_rdma_provider.so.6.0 00:02:02.280 LIB libspdk_rdma_utils.a 00:02:02.280 SO libspdk_conf.so.6.0 00:02:02.280 LIB libspdk_json.a 00:02:02.280 SYMLINK libspdk_rdma_provider.so 00:02:02.280 SO libspdk_rdma_utils.so.1.0 00:02:02.280 SYMLINK libspdk_conf.so 00:02:02.280 SO libspdk_json.so.6.0 00:02:02.542 SYMLINK libspdk_rdma_utils.so 00:02:02.542 SYMLINK libspdk_json.so 00:02:02.542 LIB libspdk_idxd.a 00:02:02.542 SO libspdk_idxd.so.12.0 00:02:02.542 LIB libspdk_vmd.a 00:02:02.542 SYMLINK libspdk_idxd.so 00:02:02.542 SO libspdk_vmd.so.6.0 00:02:02.804 SYMLINK libspdk_vmd.so 00:02:02.804 CC lib/jsonrpc/jsonrpc_server.o 00:02:02.804 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:02.804 CC 
lib/jsonrpc/jsonrpc_client.o 00:02:02.804 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:03.065 LIB libspdk_jsonrpc.a 00:02:03.065 SO libspdk_jsonrpc.so.6.0 00:02:03.326 SYMLINK libspdk_jsonrpc.so 00:02:03.326 LIB libspdk_env_dpdk.a 00:02:03.326 SO libspdk_env_dpdk.so.14.1 00:02:03.587 SYMLINK libspdk_env_dpdk.so 00:02:03.587 CC lib/rpc/rpc.o 00:02:03.847 LIB libspdk_rpc.a 00:02:03.847 SO libspdk_rpc.so.6.0 00:02:03.847 SYMLINK libspdk_rpc.so 00:02:04.109 CC lib/notify/notify.o 00:02:04.109 CC lib/keyring/keyring.o 00:02:04.109 CC lib/notify/notify_rpc.o 00:02:04.109 CC lib/keyring/keyring_rpc.o 00:02:04.370 CC lib/trace/trace.o 00:02:04.370 CC lib/trace/trace_flags.o 00:02:04.370 CC lib/trace/trace_rpc.o 00:02:04.370 LIB libspdk_notify.a 00:02:04.370 SO libspdk_notify.so.6.0 00:02:04.370 LIB libspdk_keyring.a 00:02:04.370 LIB libspdk_trace.a 00:02:04.632 SO libspdk_keyring.so.1.0 00:02:04.632 SYMLINK libspdk_notify.so 00:02:04.632 SO libspdk_trace.so.10.0 00:02:04.632 SYMLINK libspdk_keyring.so 00:02:04.632 SYMLINK libspdk_trace.so 00:02:04.893 CC lib/thread/thread.o 00:02:04.893 CC lib/thread/iobuf.o 00:02:04.893 CC lib/sock/sock.o 00:02:04.893 CC lib/sock/sock_rpc.o 00:02:05.466 LIB libspdk_sock.a 00:02:05.466 SO libspdk_sock.so.10.0 00:02:05.466 SYMLINK libspdk_sock.so 00:02:05.727 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:05.727 CC lib/nvme/nvme_ctrlr.o 00:02:05.727 CC lib/nvme/nvme_fabric.o 00:02:05.727 CC lib/nvme/nvme_ns_cmd.o 00:02:05.727 CC lib/nvme/nvme_ns.o 00:02:05.727 CC lib/nvme/nvme_pcie_common.o 00:02:05.727 CC lib/nvme/nvme_pcie.o 00:02:05.727 CC lib/nvme/nvme_qpair.o 00:02:05.727 CC lib/nvme/nvme.o 00:02:05.727 CC lib/nvme/nvme_quirks.o 00:02:05.727 CC lib/nvme/nvme_transport.o 00:02:05.727 CC lib/nvme/nvme_discovery.o 00:02:05.727 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:05.727 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:05.727 CC lib/nvme/nvme_tcp.o 00:02:05.727 CC lib/nvme/nvme_opal.o 00:02:05.727 CC lib/nvme/nvme_io_msg.o 00:02:05.727 CC lib/nvme/nvme_poll_group.o 00:02:05.727 CC lib/nvme/nvme_zns.o 00:02:05.727 CC lib/nvme/nvme_stubs.o 00:02:05.727 CC lib/nvme/nvme_auth.o 00:02:05.727 CC lib/nvme/nvme_cuse.o 00:02:05.727 CC lib/nvme/nvme_vfio_user.o 00:02:05.727 CC lib/nvme/nvme_rdma.o 00:02:06.298 LIB libspdk_thread.a 00:02:06.298 SO libspdk_thread.so.10.1 00:02:06.298 SYMLINK libspdk_thread.so 00:02:06.559 CC lib/blob/request.o 00:02:06.559 CC lib/blob/blobstore.o 00:02:06.559 CC lib/blob/zeroes.o 00:02:06.559 CC lib/blob/blob_bs_dev.o 00:02:06.559 CC lib/virtio/virtio_vhost_user.o 00:02:06.559 CC lib/virtio/virtio.o 00:02:06.559 CC lib/vfu_tgt/tgt_endpoint.o 00:02:06.559 CC lib/virtio/virtio_vfio_user.o 00:02:06.559 CC lib/init/json_config.o 00:02:06.559 CC lib/vfu_tgt/tgt_rpc.o 00:02:06.559 CC lib/init/subsystem.o 00:02:06.559 CC lib/init/subsystem_rpc.o 00:02:06.559 CC lib/virtio/virtio_pci.o 00:02:06.559 CC lib/init/rpc.o 00:02:06.559 CC lib/accel/accel.o 00:02:06.559 CC lib/accel/accel_rpc.o 00:02:06.559 CC lib/accel/accel_sw.o 00:02:06.819 LIB libspdk_init.a 00:02:06.819 SO libspdk_init.so.5.0 00:02:06.819 LIB libspdk_virtio.a 00:02:06.819 LIB libspdk_vfu_tgt.a 00:02:07.080 SYMLINK libspdk_init.so 00:02:07.080 SO libspdk_virtio.so.7.0 00:02:07.080 SO libspdk_vfu_tgt.so.3.0 00:02:07.080 SYMLINK libspdk_vfu_tgt.so 00:02:07.080 SYMLINK libspdk_virtio.so 00:02:07.341 CC lib/event/app.o 00:02:07.341 CC lib/event/reactor.o 00:02:07.341 CC lib/event/log_rpc.o 00:02:07.341 CC lib/event/app_rpc.o 00:02:07.341 CC lib/event/scheduler_static.o 00:02:07.602 LIB libspdk_accel.a 
00:02:07.602 SO libspdk_accel.so.15.1 00:02:07.602 LIB libspdk_nvme.a 00:02:07.602 SYMLINK libspdk_accel.so 00:02:07.602 LIB libspdk_event.a 00:02:07.602 SO libspdk_nvme.so.13.1 00:02:07.863 SO libspdk_event.so.14.0 00:02:07.863 SYMLINK libspdk_event.so 00:02:07.863 CC lib/bdev/bdev.o 00:02:07.863 CC lib/bdev/bdev_rpc.o 00:02:07.863 CC lib/bdev/bdev_zone.o 00:02:07.863 CC lib/bdev/part.o 00:02:07.863 CC lib/bdev/scsi_nvme.o 00:02:08.124 SYMLINK libspdk_nvme.so 00:02:09.064 LIB libspdk_blob.a 00:02:09.064 SO libspdk_blob.so.11.0 00:02:09.325 SYMLINK libspdk_blob.so 00:02:09.586 CC lib/blobfs/blobfs.o 00:02:09.586 CC lib/blobfs/tree.o 00:02:09.586 CC lib/lvol/lvol.o 00:02:10.158 LIB libspdk_bdev.a 00:02:10.158 SO libspdk_bdev.so.15.1 00:02:10.158 LIB libspdk_blobfs.a 00:02:10.158 SYMLINK libspdk_bdev.so 00:02:10.419 SO libspdk_blobfs.so.10.0 00:02:10.419 LIB libspdk_lvol.a 00:02:10.419 SYMLINK libspdk_blobfs.so 00:02:10.419 SO libspdk_lvol.so.10.0 00:02:10.419 SYMLINK libspdk_lvol.so 00:02:10.679 CC lib/scsi/dev.o 00:02:10.679 CC lib/scsi/lun.o 00:02:10.679 CC lib/scsi/port.o 00:02:10.679 CC lib/scsi/scsi.o 00:02:10.679 CC lib/scsi/scsi_bdev.o 00:02:10.679 CC lib/nbd/nbd.o 00:02:10.679 CC lib/nvmf/ctrlr.o 00:02:10.679 CC lib/ublk/ublk.o 00:02:10.679 CC lib/scsi/scsi_pr.o 00:02:10.679 CC lib/nbd/nbd_rpc.o 00:02:10.679 CC lib/nvmf/ctrlr_discovery.o 00:02:10.679 CC lib/ublk/ublk_rpc.o 00:02:10.679 CC lib/scsi/scsi_rpc.o 00:02:10.679 CC lib/nvmf/ctrlr_bdev.o 00:02:10.679 CC lib/scsi/task.o 00:02:10.679 CC lib/nvmf/subsystem.o 00:02:10.679 CC lib/ftl/ftl_core.o 00:02:10.679 CC lib/nvmf/nvmf.o 00:02:10.679 CC lib/ftl/ftl_init.o 00:02:10.679 CC lib/nvmf/nvmf_rpc.o 00:02:10.679 CC lib/ftl/ftl_layout.o 00:02:10.679 CC lib/nvmf/transport.o 00:02:10.679 CC lib/ftl/ftl_debug.o 00:02:10.679 CC lib/nvmf/tcp.o 00:02:10.679 CC lib/ftl/ftl_io.o 00:02:10.679 CC lib/ftl/ftl_sb.o 00:02:10.679 CC lib/nvmf/stubs.o 00:02:10.679 CC lib/nvmf/vfio_user.o 00:02:10.679 CC lib/ftl/ftl_l2p.o 00:02:10.679 CC lib/nvmf/mdns_server.o 00:02:10.679 CC lib/ftl/ftl_l2p_flat.o 00:02:10.679 CC lib/nvmf/auth.o 00:02:10.679 CC lib/ftl/ftl_nv_cache.o 00:02:10.679 CC lib/nvmf/rdma.o 00:02:10.679 CC lib/ftl/ftl_band.o 00:02:10.679 CC lib/ftl/ftl_band_ops.o 00:02:10.679 CC lib/ftl/ftl_writer.o 00:02:10.679 CC lib/ftl/ftl_rq.o 00:02:10.679 CC lib/ftl/ftl_reloc.o 00:02:10.679 CC lib/ftl/ftl_l2p_cache.o 00:02:10.679 CC lib/ftl/ftl_p2l.o 00:02:10.679 CC lib/ftl/mngt/ftl_mngt.o 00:02:10.679 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:10.679 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:10.679 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:10.679 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:10.679 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:10.679 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:10.679 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:10.679 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:10.679 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:10.679 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:10.679 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:10.679 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:10.679 CC lib/ftl/utils/ftl_conf.o 00:02:10.679 CC lib/ftl/utils/ftl_md.o 00:02:10.679 CC lib/ftl/utils/ftl_mempool.o 00:02:10.680 CC lib/ftl/utils/ftl_property.o 00:02:10.680 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:10.680 CC lib/ftl/utils/ftl_bitmap.o 00:02:10.680 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:10.680 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:10.680 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:10.680 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:10.680 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 
00:02:10.680 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:10.680 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:10.680 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:10.680 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:10.680 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:10.680 CC lib/ftl/base/ftl_base_dev.o 00:02:10.680 CC lib/ftl/ftl_trace.o 00:02:10.680 CC lib/ftl/base/ftl_base_bdev.o 00:02:11.248 LIB libspdk_scsi.a 00:02:11.248 LIB libspdk_nbd.a 00:02:11.248 SO libspdk_scsi.so.9.0 00:02:11.248 SO libspdk_nbd.so.7.0 00:02:11.248 SYMLINK libspdk_nbd.so 00:02:11.248 SYMLINK libspdk_scsi.so 00:02:11.509 LIB libspdk_ublk.a 00:02:11.509 SO libspdk_ublk.so.3.0 00:02:11.509 SYMLINK libspdk_ublk.so 00:02:11.769 CC lib/vhost/vhost.o 00:02:11.769 CC lib/vhost/vhost_rpc.o 00:02:11.769 CC lib/vhost/vhost_scsi.o 00:02:11.769 CC lib/vhost/vhost_blk.o 00:02:11.769 CC lib/vhost/rte_vhost_user.o 00:02:11.769 CC lib/iscsi/conn.o 00:02:11.769 CC lib/iscsi/init_grp.o 00:02:11.769 CC lib/iscsi/iscsi.o 00:02:11.769 CC lib/iscsi/md5.o 00:02:11.769 CC lib/iscsi/param.o 00:02:11.769 CC lib/iscsi/portal_grp.o 00:02:11.769 CC lib/iscsi/tgt_node.o 00:02:11.769 CC lib/iscsi/iscsi_rpc.o 00:02:11.769 CC lib/iscsi/iscsi_subsystem.o 00:02:11.769 CC lib/iscsi/task.o 00:02:11.769 LIB libspdk_ftl.a 00:02:11.769 SO libspdk_ftl.so.9.0 00:02:12.050 SYMLINK libspdk_ftl.so 00:02:12.622 LIB libspdk_nvmf.a 00:02:12.622 SO libspdk_nvmf.so.19.0 00:02:12.622 LIB libspdk_vhost.a 00:02:12.622 SO libspdk_vhost.so.8.0 00:02:12.883 SYMLINK libspdk_nvmf.so 00:02:12.883 SYMLINK libspdk_vhost.so 00:02:12.883 LIB libspdk_iscsi.a 00:02:12.883 SO libspdk_iscsi.so.8.0 00:02:13.143 SYMLINK libspdk_iscsi.so 00:02:13.715 CC module/vfu_device/vfu_virtio.o 00:02:13.715 CC module/vfu_device/vfu_virtio_blk.o 00:02:13.715 CC module/vfu_device/vfu_virtio_scsi.o 00:02:13.715 CC module/vfu_device/vfu_virtio_rpc.o 00:02:13.715 CC module/env_dpdk/env_dpdk_rpc.o 00:02:13.715 CC module/sock/posix/posix.o 00:02:13.715 LIB libspdk_env_dpdk_rpc.a 00:02:13.715 CC module/keyring/linux/keyring_rpc.o 00:02:13.715 CC module/keyring/linux/keyring.o 00:02:13.715 CC module/accel/iaa/accel_iaa.o 00:02:13.715 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:13.715 CC module/accel/iaa/accel_iaa_rpc.o 00:02:13.715 CC module/blob/bdev/blob_bdev.o 00:02:13.715 CC module/accel/ioat/accel_ioat.o 00:02:13.715 CC module/accel/error/accel_error.o 00:02:13.715 CC module/accel/ioat/accel_ioat_rpc.o 00:02:13.715 CC module/accel/error/accel_error_rpc.o 00:02:13.715 CC module/scheduler/gscheduler/gscheduler.o 00:02:13.715 CC module/keyring/file/keyring.o 00:02:13.715 CC module/keyring/file/keyring_rpc.o 00:02:13.715 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:13.715 CC module/accel/dsa/accel_dsa.o 00:02:13.715 CC module/accel/dsa/accel_dsa_rpc.o 00:02:13.715 SO libspdk_env_dpdk_rpc.so.6.0 00:02:13.976 SYMLINK libspdk_env_dpdk_rpc.so 00:02:13.976 LIB libspdk_keyring_linux.a 00:02:13.976 LIB libspdk_scheduler_gscheduler.a 00:02:13.976 SO libspdk_keyring_linux.so.1.0 00:02:13.976 LIB libspdk_scheduler_dpdk_governor.a 00:02:13.976 LIB libspdk_keyring_file.a 00:02:13.976 LIB libspdk_accel_error.a 00:02:13.976 SO libspdk_scheduler_gscheduler.so.4.0 00:02:13.976 LIB libspdk_accel_ioat.a 00:02:13.976 LIB libspdk_accel_iaa.a 00:02:13.976 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:13.976 LIB libspdk_scheduler_dynamic.a 00:02:13.976 SO libspdk_keyring_file.so.1.0 00:02:13.976 SO libspdk_accel_error.so.2.0 00:02:13.976 SYMLINK libspdk_keyring_linux.so 00:02:13.976 SO libspdk_accel_ioat.so.6.0 00:02:13.976 SO 
libspdk_accel_iaa.so.3.0 00:02:13.976 LIB libspdk_blob_bdev.a 00:02:13.976 SO libspdk_scheduler_dynamic.so.4.0 00:02:13.976 SYMLINK libspdk_scheduler_gscheduler.so 00:02:13.976 LIB libspdk_accel_dsa.a 00:02:13.976 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:13.976 SYMLINK libspdk_keyring_file.so 00:02:13.976 SYMLINK libspdk_accel_error.so 00:02:13.976 SO libspdk_blob_bdev.so.11.0 00:02:13.976 SYMLINK libspdk_accel_ioat.so 00:02:13.976 SO libspdk_accel_dsa.so.5.0 00:02:14.237 SYMLINK libspdk_scheduler_dynamic.so 00:02:14.237 SYMLINK libspdk_accel_iaa.so 00:02:14.237 SYMLINK libspdk_blob_bdev.so 00:02:14.237 LIB libspdk_vfu_device.a 00:02:14.237 SYMLINK libspdk_accel_dsa.so 00:02:14.237 SO libspdk_vfu_device.so.3.0 00:02:14.237 SYMLINK libspdk_vfu_device.so 00:02:14.498 LIB libspdk_sock_posix.a 00:02:14.498 SO libspdk_sock_posix.so.6.0 00:02:14.498 SYMLINK libspdk_sock_posix.so 00:02:14.757 CC module/bdev/gpt/gpt.o 00:02:14.757 CC module/bdev/null/bdev_null.o 00:02:14.757 CC module/bdev/null/bdev_null_rpc.o 00:02:14.757 CC module/bdev/gpt/vbdev_gpt.o 00:02:14.757 CC module/bdev/delay/vbdev_delay.o 00:02:14.757 CC module/bdev/error/vbdev_error.o 00:02:14.757 CC module/bdev/error/vbdev_error_rpc.o 00:02:14.757 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:14.757 CC module/bdev/malloc/bdev_malloc.o 00:02:14.757 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:14.757 CC module/bdev/nvme/bdev_nvme.o 00:02:14.757 CC module/bdev/lvol/vbdev_lvol.o 00:02:14.757 CC module/bdev/passthru/vbdev_passthru.o 00:02:14.757 CC module/bdev/split/vbdev_split.o 00:02:14.757 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:14.757 CC module/bdev/aio/bdev_aio.o 00:02:14.757 CC module/bdev/nvme/nvme_rpc.o 00:02:14.757 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:14.757 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:14.757 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:14.757 CC module/bdev/aio/bdev_aio_rpc.o 00:02:14.757 CC module/bdev/nvme/bdev_mdns_client.o 00:02:14.757 CC module/bdev/split/vbdev_split_rpc.o 00:02:14.757 CC module/bdev/nvme/vbdev_opal.o 00:02:14.757 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:14.757 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:14.757 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:14.757 CC module/blobfs/bdev/blobfs_bdev.o 00:02:14.757 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:14.757 CC module/bdev/raid/bdev_raid.o 00:02:14.757 CC module/bdev/iscsi/bdev_iscsi.o 00:02:14.757 CC module/bdev/raid/bdev_raid_sb.o 00:02:14.757 CC module/bdev/raid/bdev_raid_rpc.o 00:02:14.757 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:14.757 CC module/bdev/raid/raid0.o 00:02:14.757 CC module/bdev/raid/raid1.o 00:02:14.757 CC module/bdev/ftl/bdev_ftl.o 00:02:14.757 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:14.757 CC module/bdev/raid/concat.o 00:02:14.757 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:14.757 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:14.757 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:15.016 LIB libspdk_blobfs_bdev.a 00:02:15.016 LIB libspdk_bdev_split.a 00:02:15.016 LIB libspdk_bdev_null.a 00:02:15.016 SO libspdk_blobfs_bdev.so.6.0 00:02:15.016 LIB libspdk_bdev_error.a 00:02:15.016 LIB libspdk_bdev_gpt.a 00:02:15.016 SO libspdk_bdev_null.so.6.0 00:02:15.016 SO libspdk_bdev_split.so.6.0 00:02:15.016 SO libspdk_bdev_error.so.6.0 00:02:15.016 SO libspdk_bdev_gpt.so.6.0 00:02:15.016 SYMLINK libspdk_blobfs_bdev.so 00:02:15.016 LIB libspdk_bdev_passthru.a 00:02:15.016 LIB libspdk_bdev_ftl.a 00:02:15.016 SYMLINK libspdk_bdev_split.so 00:02:15.016 LIB libspdk_bdev_aio.a 00:02:15.016 
SYMLINK libspdk_bdev_null.so 00:02:15.016 LIB libspdk_bdev_delay.a 00:02:15.016 LIB libspdk_bdev_zone_block.a 00:02:15.016 SO libspdk_bdev_passthru.so.6.0 00:02:15.016 LIB libspdk_bdev_malloc.a 00:02:15.016 SO libspdk_bdev_ftl.so.6.0 00:02:15.016 SYMLINK libspdk_bdev_gpt.so 00:02:15.016 SYMLINK libspdk_bdev_error.so 00:02:15.016 LIB libspdk_bdev_iscsi.a 00:02:15.016 SO libspdk_bdev_aio.so.6.0 00:02:15.016 SO libspdk_bdev_zone_block.so.6.0 00:02:15.016 SO libspdk_bdev_delay.so.6.0 00:02:15.277 SO libspdk_bdev_malloc.so.6.0 00:02:15.277 SYMLINK libspdk_bdev_ftl.so 00:02:15.277 SO libspdk_bdev_iscsi.so.6.0 00:02:15.277 SYMLINK libspdk_bdev_passthru.so 00:02:15.277 SYMLINK libspdk_bdev_aio.so 00:02:15.277 SYMLINK libspdk_bdev_delay.so 00:02:15.277 SYMLINK libspdk_bdev_zone_block.so 00:02:15.277 SYMLINK libspdk_bdev_malloc.so 00:02:15.277 LIB libspdk_bdev_lvol.a 00:02:15.277 LIB libspdk_bdev_virtio.a 00:02:15.277 SYMLINK libspdk_bdev_iscsi.so 00:02:15.277 SO libspdk_bdev_virtio.so.6.0 00:02:15.277 SO libspdk_bdev_lvol.so.6.0 00:02:15.277 SYMLINK libspdk_bdev_virtio.so 00:02:15.277 SYMLINK libspdk_bdev_lvol.so 00:02:15.538 LIB libspdk_bdev_raid.a 00:02:15.799 SO libspdk_bdev_raid.so.6.0 00:02:15.799 SYMLINK libspdk_bdev_raid.so 00:02:16.741 LIB libspdk_bdev_nvme.a 00:02:16.741 SO libspdk_bdev_nvme.so.7.0 00:02:16.741 SYMLINK libspdk_bdev_nvme.so 00:02:17.311 CC module/event/subsystems/keyring/keyring.o 00:02:17.311 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:17.311 CC module/event/subsystems/sock/sock.o 00:02:17.311 CC module/event/subsystems/iobuf/iobuf.o 00:02:17.311 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:17.572 CC module/event/subsystems/vmd/vmd.o 00:02:17.572 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:17.572 CC module/event/subsystems/scheduler/scheduler.o 00:02:17.572 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:17.572 LIB libspdk_event_keyring.a 00:02:17.572 LIB libspdk_event_vmd.a 00:02:17.572 LIB libspdk_event_vfu_tgt.a 00:02:17.572 LIB libspdk_event_iobuf.a 00:02:17.572 LIB libspdk_event_sock.a 00:02:17.572 LIB libspdk_event_vhost_blk.a 00:02:17.572 LIB libspdk_event_scheduler.a 00:02:17.572 SO libspdk_event_keyring.so.1.0 00:02:17.572 SO libspdk_event_vmd.so.6.0 00:02:17.572 SO libspdk_event_vfu_tgt.so.3.0 00:02:17.572 SO libspdk_event_iobuf.so.3.0 00:02:17.572 SO libspdk_event_sock.so.5.0 00:02:17.572 SO libspdk_event_scheduler.so.4.0 00:02:17.572 SO libspdk_event_vhost_blk.so.3.0 00:02:17.572 SYMLINK libspdk_event_keyring.so 00:02:17.832 SYMLINK libspdk_event_iobuf.so 00:02:17.832 SYMLINK libspdk_event_vmd.so 00:02:17.832 SYMLINK libspdk_event_vfu_tgt.so 00:02:17.832 SYMLINK libspdk_event_sock.so 00:02:17.832 SYMLINK libspdk_event_scheduler.so 00:02:17.832 SYMLINK libspdk_event_vhost_blk.so 00:02:18.093 CC module/event/subsystems/accel/accel.o 00:02:18.093 LIB libspdk_event_accel.a 00:02:18.354 SO libspdk_event_accel.so.6.0 00:02:18.354 SYMLINK libspdk_event_accel.so 00:02:18.615 CC module/event/subsystems/bdev/bdev.o 00:02:18.876 LIB libspdk_event_bdev.a 00:02:18.876 SO libspdk_event_bdev.so.6.0 00:02:18.876 SYMLINK libspdk_event_bdev.so 00:02:19.449 CC module/event/subsystems/ublk/ublk.o 00:02:19.449 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:19.449 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:19.449 CC module/event/subsystems/nbd/nbd.o 00:02:19.449 CC module/event/subsystems/scsi/scsi.o 00:02:19.449 LIB libspdk_event_ublk.a 00:02:19.449 LIB libspdk_event_nbd.a 00:02:19.449 LIB libspdk_event_scsi.a 00:02:19.449 SO libspdk_event_ublk.so.3.0 
00:02:19.449 SO libspdk_event_nbd.so.6.0 00:02:19.449 SO libspdk_event_scsi.so.6.0 00:02:19.449 LIB libspdk_event_nvmf.a 00:02:19.449 SYMLINK libspdk_event_ublk.so 00:02:19.449 SO libspdk_event_nvmf.so.6.0 00:02:19.449 SYMLINK libspdk_event_nbd.so 00:02:19.450 SYMLINK libspdk_event_scsi.so 00:02:19.711 SYMLINK libspdk_event_nvmf.so 00:02:19.972 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:19.972 CC module/event/subsystems/iscsi/iscsi.o 00:02:19.972 LIB libspdk_event_vhost_scsi.a 00:02:20.233 LIB libspdk_event_iscsi.a 00:02:20.233 SO libspdk_event_vhost_scsi.so.3.0 00:02:20.233 SO libspdk_event_iscsi.so.6.0 00:02:20.233 SYMLINK libspdk_event_vhost_scsi.so 00:02:20.233 SYMLINK libspdk_event_iscsi.so 00:02:20.494 SO libspdk.so.6.0 00:02:20.494 SYMLINK libspdk.so 00:02:20.754 TEST_HEADER include/spdk/accel.h 00:02:20.754 TEST_HEADER include/spdk/assert.h 00:02:20.754 TEST_HEADER include/spdk/accel_module.h 00:02:20.754 TEST_HEADER include/spdk/barrier.h 00:02:20.754 TEST_HEADER include/spdk/base64.h 00:02:20.754 TEST_HEADER include/spdk/bdev.h 00:02:20.754 CC test/rpc_client/rpc_client_test.o 00:02:20.754 CXX app/trace/trace.o 00:02:20.754 TEST_HEADER include/spdk/bdev_module.h 00:02:20.754 TEST_HEADER include/spdk/bit_array.h 00:02:20.754 TEST_HEADER include/spdk/bdev_zone.h 00:02:20.754 TEST_HEADER include/spdk/bit_pool.h 00:02:20.754 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:20.754 TEST_HEADER include/spdk/blob_bdev.h 00:02:20.755 CC app/trace_record/trace_record.o 00:02:20.755 TEST_HEADER include/spdk/blobfs.h 00:02:20.755 TEST_HEADER include/spdk/blob.h 00:02:20.755 TEST_HEADER include/spdk/conf.h 00:02:20.755 TEST_HEADER include/spdk/config.h 00:02:20.755 TEST_HEADER include/spdk/cpuset.h 00:02:20.755 TEST_HEADER include/spdk/crc16.h 00:02:20.755 TEST_HEADER include/spdk/crc32.h 00:02:20.755 TEST_HEADER include/spdk/crc64.h 00:02:20.755 TEST_HEADER include/spdk/dif.h 00:02:20.755 TEST_HEADER include/spdk/dma.h 00:02:20.755 CC app/spdk_top/spdk_top.o 00:02:20.755 TEST_HEADER include/spdk/endian.h 00:02:20.755 TEST_HEADER include/spdk/env.h 00:02:20.755 TEST_HEADER include/spdk/env_dpdk.h 00:02:20.755 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:20.755 TEST_HEADER include/spdk/event.h 00:02:20.755 CC app/spdk_lspci/spdk_lspci.o 00:02:20.755 TEST_HEADER include/spdk/fd_group.h 00:02:20.755 TEST_HEADER include/spdk/fd.h 00:02:20.755 TEST_HEADER include/spdk/file.h 00:02:20.755 CC app/spdk_nvme_identify/identify.o 00:02:20.755 TEST_HEADER include/spdk/ftl.h 00:02:20.755 CC app/spdk_nvme_perf/perf.o 00:02:20.755 CC app/spdk_nvme_discover/discovery_aer.o 00:02:20.755 TEST_HEADER include/spdk/gpt_spec.h 00:02:20.755 TEST_HEADER include/spdk/histogram_data.h 00:02:20.755 TEST_HEADER include/spdk/hexlify.h 00:02:20.755 TEST_HEADER include/spdk/idxd.h 00:02:20.755 TEST_HEADER include/spdk/idxd_spec.h 00:02:20.755 TEST_HEADER include/spdk/init.h 00:02:20.755 TEST_HEADER include/spdk/ioat_spec.h 00:02:20.755 TEST_HEADER include/spdk/ioat.h 00:02:20.755 TEST_HEADER include/spdk/iscsi_spec.h 00:02:20.755 TEST_HEADER include/spdk/json.h 00:02:20.755 TEST_HEADER include/spdk/jsonrpc.h 00:02:20.755 TEST_HEADER include/spdk/keyring.h 00:02:20.755 TEST_HEADER include/spdk/keyring_module.h 00:02:20.755 TEST_HEADER include/spdk/likely.h 00:02:20.755 TEST_HEADER include/spdk/log.h 00:02:20.755 TEST_HEADER include/spdk/lvol.h 00:02:20.755 TEST_HEADER include/spdk/memory.h 00:02:20.755 TEST_HEADER include/spdk/mmio.h 00:02:20.755 TEST_HEADER include/spdk/net.h 00:02:20.755 TEST_HEADER 
include/spdk/nbd.h 00:02:20.755 TEST_HEADER include/spdk/notify.h 00:02:20.755 TEST_HEADER include/spdk/nvme.h 00:02:20.755 TEST_HEADER include/spdk/nvme_intel.h 00:02:20.755 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:20.755 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:20.755 TEST_HEADER include/spdk/nvme_spec.h 00:02:20.755 CC app/spdk_dd/spdk_dd.o 00:02:20.755 CC app/nvmf_tgt/nvmf_main.o 00:02:20.755 TEST_HEADER include/spdk/nvme_zns.h 00:02:20.755 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:20.755 CC app/iscsi_tgt/iscsi_tgt.o 00:02:20.755 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:20.755 TEST_HEADER include/spdk/nvmf.h 00:02:20.755 TEST_HEADER include/spdk/nvmf_spec.h 00:02:20.755 TEST_HEADER include/spdk/nvmf_transport.h 00:02:20.755 TEST_HEADER include/spdk/opal.h 00:02:20.755 TEST_HEADER include/spdk/opal_spec.h 00:02:20.755 TEST_HEADER include/spdk/pci_ids.h 00:02:20.755 TEST_HEADER include/spdk/pipe.h 00:02:20.755 TEST_HEADER include/spdk/queue.h 00:02:20.755 TEST_HEADER include/spdk/rpc.h 00:02:20.755 TEST_HEADER include/spdk/reduce.h 00:02:20.755 TEST_HEADER include/spdk/scsi.h 00:02:20.755 TEST_HEADER include/spdk/scheduler.h 00:02:20.755 TEST_HEADER include/spdk/scsi_spec.h 00:02:20.755 TEST_HEADER include/spdk/string.h 00:02:20.755 TEST_HEADER include/spdk/stdinc.h 00:02:20.755 TEST_HEADER include/spdk/sock.h 00:02:20.755 TEST_HEADER include/spdk/thread.h 00:02:20.755 TEST_HEADER include/spdk/trace.h 00:02:20.755 TEST_HEADER include/spdk/trace_parser.h 00:02:20.755 TEST_HEADER include/spdk/tree.h 00:02:20.755 TEST_HEADER include/spdk/ublk.h 00:02:21.015 TEST_HEADER include/spdk/util.h 00:02:21.015 TEST_HEADER include/spdk/version.h 00:02:21.015 CC app/spdk_tgt/spdk_tgt.o 00:02:21.015 TEST_HEADER include/spdk/uuid.h 00:02:21.015 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:21.015 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:21.015 TEST_HEADER include/spdk/vmd.h 00:02:21.015 TEST_HEADER include/spdk/xor.h 00:02:21.015 TEST_HEADER include/spdk/vhost.h 00:02:21.015 TEST_HEADER include/spdk/zipf.h 00:02:21.015 CXX test/cpp_headers/accel_module.o 00:02:21.015 CXX test/cpp_headers/accel.o 00:02:21.015 CXX test/cpp_headers/assert.o 00:02:21.015 CXX test/cpp_headers/base64.o 00:02:21.015 CXX test/cpp_headers/bdev_zone.o 00:02:21.015 CXX test/cpp_headers/barrier.o 00:02:21.015 CXX test/cpp_headers/bdev_module.o 00:02:21.015 CXX test/cpp_headers/bit_pool.o 00:02:21.015 CXX test/cpp_headers/bit_array.o 00:02:21.015 CXX test/cpp_headers/bdev.o 00:02:21.015 CXX test/cpp_headers/blobfs_bdev.o 00:02:21.015 CXX test/cpp_headers/blob_bdev.o 00:02:21.015 CXX test/cpp_headers/blobfs.o 00:02:21.015 CXX test/cpp_headers/blob.o 00:02:21.015 CXX test/cpp_headers/config.o 00:02:21.015 CXX test/cpp_headers/cpuset.o 00:02:21.015 CXX test/cpp_headers/conf.o 00:02:21.015 CXX test/cpp_headers/crc32.o 00:02:21.015 CXX test/cpp_headers/crc16.o 00:02:21.015 CXX test/cpp_headers/dma.o 00:02:21.015 CXX test/cpp_headers/crc64.o 00:02:21.015 CXX test/cpp_headers/endian.o 00:02:21.015 CXX test/cpp_headers/dif.o 00:02:21.015 CXX test/cpp_headers/env_dpdk.o 00:02:21.015 CXX test/cpp_headers/event.o 00:02:21.015 CXX test/cpp_headers/fd_group.o 00:02:21.015 CXX test/cpp_headers/env.o 00:02:21.015 CXX test/cpp_headers/file.o 00:02:21.015 CXX test/cpp_headers/ftl.o 00:02:21.015 CXX test/cpp_headers/fd.o 00:02:21.015 CXX test/cpp_headers/gpt_spec.o 00:02:21.015 CXX test/cpp_headers/hexlify.o 00:02:21.015 CXX test/cpp_headers/histogram_data.o 00:02:21.015 CXX test/cpp_headers/ioat.o 00:02:21.015 CXX 
test/cpp_headers/idxd.o 00:02:21.015 CXX test/cpp_headers/idxd_spec.o 00:02:21.015 CXX test/cpp_headers/init.o 00:02:21.015 CXX test/cpp_headers/ioat_spec.o 00:02:21.015 CXX test/cpp_headers/jsonrpc.o 00:02:21.015 CXX test/cpp_headers/iscsi_spec.o 00:02:21.015 CXX test/cpp_headers/json.o 00:02:21.015 CXX test/cpp_headers/keyring.o 00:02:21.015 CXX test/cpp_headers/likely.o 00:02:21.015 CXX test/cpp_headers/keyring_module.o 00:02:21.015 CXX test/cpp_headers/log.o 00:02:21.015 CXX test/cpp_headers/lvol.o 00:02:21.015 CXX test/cpp_headers/memory.o 00:02:21.015 CXX test/cpp_headers/net.o 00:02:21.015 CXX test/cpp_headers/mmio.o 00:02:21.015 CXX test/cpp_headers/nbd.o 00:02:21.015 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:21.015 CXX test/cpp_headers/notify.o 00:02:21.015 CXX test/cpp_headers/nvme.o 00:02:21.015 CXX test/cpp_headers/nvme_spec.o 00:02:21.015 CXX test/cpp_headers/nvme_intel.o 00:02:21.015 CXX test/cpp_headers/nvme_ocssd.o 00:02:21.015 CXX test/cpp_headers/nvme_zns.o 00:02:21.015 CXX test/cpp_headers/nvmf_cmd.o 00:02:21.015 CXX test/cpp_headers/nvmf_spec.o 00:02:21.015 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:21.015 CXX test/cpp_headers/nvmf.o 00:02:21.015 CXX test/cpp_headers/opal_spec.o 00:02:21.015 CXX test/cpp_headers/opal.o 00:02:21.015 CXX test/cpp_headers/nvmf_transport.o 00:02:21.015 CXX test/cpp_headers/pci_ids.o 00:02:21.015 CXX test/cpp_headers/pipe.o 00:02:21.015 CXX test/cpp_headers/queue.o 00:02:21.015 CXX test/cpp_headers/reduce.o 00:02:21.015 CXX test/cpp_headers/rpc.o 00:02:21.015 CXX test/cpp_headers/scheduler.o 00:02:21.016 CXX test/cpp_headers/sock.o 00:02:21.016 CXX test/cpp_headers/scsi.o 00:02:21.016 CXX test/cpp_headers/scsi_spec.o 00:02:21.016 CXX test/cpp_headers/stdinc.o 00:02:21.016 CXX test/cpp_headers/string.o 00:02:21.016 CXX test/cpp_headers/thread.o 00:02:21.016 CXX test/cpp_headers/trace.o 00:02:21.016 CXX test/cpp_headers/util.o 00:02:21.016 CXX test/cpp_headers/trace_parser.o 00:02:21.016 CXX test/cpp_headers/tree.o 00:02:21.016 CXX test/cpp_headers/ublk.o 00:02:21.016 CXX test/cpp_headers/uuid.o 00:02:21.016 CXX test/cpp_headers/version.o 00:02:21.016 CXX test/cpp_headers/vfio_user_pci.o 00:02:21.016 CXX test/cpp_headers/vfio_user_spec.o 00:02:21.016 CXX test/cpp_headers/vhost.o 00:02:21.016 CXX test/cpp_headers/vmd.o 00:02:21.016 CXX test/cpp_headers/xor.o 00:02:21.016 CXX test/cpp_headers/zipf.o 00:02:21.016 CC test/env/vtophys/vtophys.o 00:02:21.016 CC examples/ioat/verify/verify.o 00:02:21.016 CC examples/ioat/perf/perf.o 00:02:21.016 CC test/app/jsoncat/jsoncat.o 00:02:21.016 CC test/env/memory/memory_ut.o 00:02:21.016 CC test/thread/poller_perf/poller_perf.o 00:02:21.016 CC examples/util/zipf/zipf.o 00:02:21.016 CC test/app/stub/stub.o 00:02:21.016 CC test/app/histogram_perf/histogram_perf.o 00:02:21.016 CC test/env/pci/pci_ut.o 00:02:21.016 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:21.016 LINK rpc_client_test 00:02:21.016 CC test/dma/test_dma/test_dma.o 00:02:21.016 CC app/fio/nvme/fio_plugin.o 00:02:21.283 LINK spdk_lspci 00:02:21.283 CC app/fio/bdev/fio_plugin.o 00:02:21.283 CC test/app/bdev_svc/bdev_svc.o 00:02:21.283 LINK interrupt_tgt 00:02:21.543 LINK spdk_nvme_discover 00:02:21.543 LINK nvmf_tgt 00:02:21.543 CC test/env/mem_callbacks/mem_callbacks.o 00:02:21.543 LINK iscsi_tgt 00:02:21.543 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:21.543 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:21.543 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:21.543 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:21.543 LINK 
spdk_trace_record 00:02:21.543 LINK spdk_tgt 00:02:21.543 LINK env_dpdk_post_init 00:02:21.802 LINK ioat_perf 00:02:21.802 LINK vtophys 00:02:21.802 LINK spdk_dd 00:02:21.802 LINK zipf 00:02:21.802 LINK spdk_trace 00:02:21.802 LINK jsoncat 00:02:21.802 LINK histogram_perf 00:02:21.802 LINK stub 00:02:21.802 LINK poller_perf 00:02:21.802 LINK bdev_svc 00:02:21.802 LINK verify 00:02:22.061 LINK test_dma 00:02:22.061 LINK nvme_fuzz 00:02:22.061 LINK vhost_fuzz 00:02:22.061 LINK pci_ut 00:02:22.061 LINK spdk_nvme 00:02:22.061 LINK spdk_nvme_identify 00:02:22.061 CC app/vhost/vhost.o 00:02:22.061 LINK spdk_top 00:02:22.061 LINK spdk_bdev 00:02:22.321 CC examples/vmd/led/led.o 00:02:22.321 CC test/event/event_perf/event_perf.o 00:02:22.321 CC examples/sock/hello_world/hello_sock.o 00:02:22.321 CC examples/idxd/perf/perf.o 00:02:22.321 CC test/event/reactor/reactor.o 00:02:22.321 LINK mem_callbacks 00:02:22.321 CC examples/vmd/lsvmd/lsvmd.o 00:02:22.321 CC test/event/reactor_perf/reactor_perf.o 00:02:22.321 CC examples/thread/thread/thread_ex.o 00:02:22.321 CC test/event/app_repeat/app_repeat.o 00:02:22.321 LINK spdk_nvme_perf 00:02:22.321 CC test/event/scheduler/scheduler.o 00:02:22.321 LINK led 00:02:22.321 LINK vhost 00:02:22.321 LINK event_perf 00:02:22.321 LINK reactor 00:02:22.321 LINK lsvmd 00:02:22.321 LINK reactor_perf 00:02:22.582 LINK app_repeat 00:02:22.582 LINK hello_sock 00:02:22.582 LINK memory_ut 00:02:22.582 LINK idxd_perf 00:02:22.582 LINK thread 00:02:22.582 CC test/blobfs/mkfs/mkfs.o 00:02:22.582 CC test/nvme/reset/reset.o 00:02:22.582 CC test/nvme/e2edp/nvme_dp.o 00:02:22.582 CC test/nvme/connect_stress/connect_stress.o 00:02:22.582 CC test/nvme/startup/startup.o 00:02:22.582 CC test/nvme/err_injection/err_injection.o 00:02:22.582 LINK scheduler 00:02:22.582 CC test/nvme/simple_copy/simple_copy.o 00:02:22.582 CC test/nvme/cuse/cuse.o 00:02:22.582 CC test/nvme/fdp/fdp.o 00:02:22.582 CC test/nvme/compliance/nvme_compliance.o 00:02:22.582 CC test/nvme/boot_partition/boot_partition.o 00:02:22.582 CC test/nvme/reserve/reserve.o 00:02:22.582 CC test/nvme/aer/aer.o 00:02:22.582 CC test/nvme/sgl/sgl.o 00:02:22.582 CC test/nvme/overhead/overhead.o 00:02:22.582 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:22.582 CC test/nvme/fused_ordering/fused_ordering.o 00:02:22.582 CC test/accel/dif/dif.o 00:02:22.582 CC test/lvol/esnap/esnap.o 00:02:22.841 LINK startup 00:02:22.841 LINK connect_stress 00:02:22.841 LINK boot_partition 00:02:22.841 LINK err_injection 00:02:22.841 LINK fused_ordering 00:02:22.841 LINK mkfs 00:02:22.841 LINK reserve 00:02:22.841 LINK reset 00:02:22.841 LINK doorbell_aers 00:02:22.841 LINK simple_copy 00:02:22.841 LINK nvme_dp 00:02:22.841 LINK sgl 00:02:22.841 LINK aer 00:02:22.841 LINK overhead 00:02:22.841 LINK fdp 00:02:22.841 LINK nvme_compliance 00:02:22.841 CC examples/nvme/hello_world/hello_world.o 00:02:22.841 CC examples/nvme/reconnect/reconnect.o 00:02:22.841 CC examples/nvme/arbitration/arbitration.o 00:02:23.102 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:23.102 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:23.102 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:23.102 CC examples/nvme/abort/abort.o 00:02:23.102 CC examples/nvme/hotplug/hotplug.o 00:02:23.102 LINK dif 00:02:23.102 LINK iscsi_fuzz 00:02:23.102 CC examples/accel/perf/accel_perf.o 00:02:23.102 CC examples/blob/cli/blobcli.o 00:02:23.102 CC examples/blob/hello_world/hello_blob.o 00:02:23.102 LINK pmr_persistence 00:02:23.102 LINK cmb_copy 00:02:23.102 LINK hello_world 
00:02:23.363 LINK hotplug 00:02:23.363 LINK reconnect 00:02:23.363 LINK arbitration 00:02:23.363 LINK abort 00:02:23.363 LINK nvme_manage 00:02:23.363 LINK hello_blob 00:02:23.623 LINK accel_perf 00:02:23.623 CC test/bdev/bdevio/bdevio.o 00:02:23.623 LINK blobcli 00:02:23.623 LINK cuse 00:02:23.884 LINK bdevio 00:02:24.145 CC examples/bdev/hello_world/hello_bdev.o 00:02:24.145 CC examples/bdev/bdevperf/bdevperf.o 00:02:24.407 LINK hello_bdev 00:02:24.978 LINK bdevperf 00:02:25.551 CC examples/nvmf/nvmf/nvmf.o 00:02:25.813 LINK nvmf 00:02:26.827 LINK esnap 00:02:27.405 00:02:27.405 real 0m51.260s 00:02:27.405 user 6m34.219s 00:02:27.405 sys 4m34.598s 00:02:27.405 20:38:31 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:27.405 20:38:31 make -- common/autotest_common.sh@10 -- $ set +x 00:02:27.405 ************************************ 00:02:27.405 END TEST make 00:02:27.405 ************************************ 00:02:27.405 20:38:31 -- common/autotest_common.sh@1142 -- $ return 0 00:02:27.405 20:38:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:27.405 20:38:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:27.405 20:38:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:27.405 20:38:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.405 20:38:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:27.405 20:38:31 -- pm/common@44 -- $ pid=1242093 00:02:27.405 20:38:31 -- pm/common@50 -- $ kill -TERM 1242093 00:02:27.405 20:38:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.405 20:38:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:27.405 20:38:31 -- pm/common@44 -- $ pid=1242094 00:02:27.405 20:38:31 -- pm/common@50 -- $ kill -TERM 1242094 00:02:27.405 20:38:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.405 20:38:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:27.405 20:38:31 -- pm/common@44 -- $ pid=1242096 00:02:27.405 20:38:31 -- pm/common@50 -- $ kill -TERM 1242096 00:02:27.405 20:38:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.405 20:38:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:27.405 20:38:31 -- pm/common@44 -- $ pid=1242120 00:02:27.405 20:38:31 -- pm/common@50 -- $ sudo -E kill -TERM 1242120 00:02:27.405 20:38:31 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:27.405 20:38:31 -- nvmf/common.sh@7 -- # uname -s 00:02:27.405 20:38:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:27.405 20:38:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:27.405 20:38:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:27.405 20:38:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:27.405 20:38:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:27.405 20:38:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:27.405 20:38:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:27.405 20:38:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:27.405 20:38:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:27.405 20:38:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:27.405 20:38:31 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:27.405 20:38:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:27.405 20:38:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:27.405 20:38:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:27.405 20:38:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:27.405 20:38:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:27.405 20:38:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:27.405 20:38:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:27.405 20:38:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:27.405 20:38:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:27.405 20:38:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.405 20:38:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.405 20:38:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.405 20:38:31 -- paths/export.sh@5 -- # export PATH 00:02:27.405 20:38:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.405 20:38:31 -- nvmf/common.sh@47 -- # : 0 00:02:27.405 20:38:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:27.405 20:38:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:27.405 20:38:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:27.405 20:38:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:27.405 20:38:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:27.405 20:38:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:27.405 20:38:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:27.405 20:38:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:27.405 20:38:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:27.405 20:38:31 -- spdk/autotest.sh@32 -- # uname -s 00:02:27.405 20:38:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:27.405 20:38:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:27.405 20:38:31 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:27.405 20:38:31 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:27.405 20:38:31 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:27.405 20:38:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:27.405 20:38:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:27.405 20:38:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:27.405 20:38:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:27.405 20:38:31 -- spdk/autotest.sh@48 -- # udevadm_pid=1305220 00:02:27.405 20:38:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:27.405 20:38:31 -- pm/common@17 -- # local monitor 00:02:27.405 20:38:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.405 20:38:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.405 20:38:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.405 20:38:31 -- pm/common@21 -- # date +%s 00:02:27.405 20:38:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.666 20:38:31 -- pm/common@21 -- # date +%s 00:02:27.666 20:38:31 -- pm/common@25 -- # sleep 1 00:02:27.666 20:38:31 -- pm/common@21 -- # date +%s 00:02:27.666 20:38:31 -- pm/common@21 -- # date +%s 00:02:27.666 20:38:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721068711 00:02:27.666 20:38:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721068711 00:02:27.666 20:38:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721068711 00:02:27.666 20:38:31 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721068711 00:02:27.667 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721068711_collect-cpu-temp.pm.log 00:02:27.667 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721068711_collect-vmstat.pm.log 00:02:27.667 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721068711_collect-cpu-load.pm.log 00:02:27.667 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721068711_collect-bmc-pm.bmc.pm.log 00:02:28.609 20:38:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:28.609 20:38:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:28.609 20:38:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:28.609 20:38:32 -- common/autotest_common.sh@10 -- # set +x 00:02:28.609 20:38:32 -- spdk/autotest.sh@59 -- # create_test_list 00:02:28.609 20:38:32 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:28.609 20:38:32 -- common/autotest_common.sh@10 -- # set +x 00:02:28.609 20:38:32 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:28.609 20:38:32 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.609 20:38:32 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:02:28.609 20:38:32 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:28.609 20:38:32 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.609 20:38:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:28.609 20:38:32 -- common/autotest_common.sh@1455 -- # uname 00:02:28.610 20:38:32 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:28.610 20:38:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:28.610 20:38:32 -- common/autotest_common.sh@1475 -- # uname 00:02:28.610 20:38:32 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:28.610 20:38:32 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:28.610 20:38:32 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:28.610 20:38:32 -- spdk/autotest.sh@72 -- # hash lcov 00:02:28.610 20:38:32 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:28.610 20:38:32 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:28.610 --rc lcov_branch_coverage=1 00:02:28.610 --rc lcov_function_coverage=1 00:02:28.610 --rc genhtml_branch_coverage=1 00:02:28.610 --rc genhtml_function_coverage=1 00:02:28.610 --rc genhtml_legend=1 00:02:28.610 --rc geninfo_all_blocks=1 00:02:28.610 ' 00:02:28.610 20:38:32 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:28.610 --rc lcov_branch_coverage=1 00:02:28.610 --rc lcov_function_coverage=1 00:02:28.610 --rc genhtml_branch_coverage=1 00:02:28.610 --rc genhtml_function_coverage=1 00:02:28.610 --rc genhtml_legend=1 00:02:28.610 --rc geninfo_all_blocks=1 00:02:28.610 ' 00:02:28.610 20:38:32 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:28.610 --rc lcov_branch_coverage=1 00:02:28.610 --rc lcov_function_coverage=1 00:02:28.610 --rc genhtml_branch_coverage=1 00:02:28.610 --rc genhtml_function_coverage=1 00:02:28.610 --rc genhtml_legend=1 00:02:28.610 --rc geninfo_all_blocks=1 00:02:28.610 --no-external' 00:02:28.610 20:38:32 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:28.610 --rc lcov_branch_coverage=1 00:02:28.610 --rc lcov_function_coverage=1 00:02:28.610 --rc genhtml_branch_coverage=1 00:02:28.610 --rc genhtml_function_coverage=1 00:02:28.610 --rc genhtml_legend=1 00:02:28.610 --rc geninfo_all_blocks=1 00:02:28.610 --no-external' 00:02:28.610 20:38:32 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:28.610 lcov: LCOV version 1.14 00:02:28.610 20:38:32 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:29.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:29.995 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:29.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:29.995 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:29.995 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:29.995 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:30.256 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:30.257 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:30.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:30.257 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:30.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:30.257 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:30.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:30.257 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:30.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:30.257 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:30.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:30.257 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:30.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:30.257 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:30.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:30.257 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:30.519 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:30.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:30.519 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:30.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:30.780 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:30.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:30.780 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:30.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:30.780 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:30.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:30.780 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:30.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:30.780 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:30.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:30.780 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:30.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:30.780 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:30.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:30.780 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:30.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:30.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:30.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:30.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:30.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:30.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:30.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:30.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:30.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:30.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:30.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:30.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:30.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:30.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:30.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:30.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:30.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:30.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:30.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:30.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:30.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:30.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:30.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:30.781 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:30.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:30.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:30.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:30.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:30.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:30.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:30.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:30.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:45.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:45.688 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:57.918 20:39:01 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:57.918 20:39:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:57.918 20:39:01 -- common/autotest_common.sh@10 -- # set +x 00:02:57.918 20:39:01 -- spdk/autotest.sh@91 -- # rm -f 00:02:57.918 20:39:01 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.228 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:01.228 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:01.228 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:01.228 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:01.228 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:01.228 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:01.228 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:01.228 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:01.228 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:01.228 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:01.489 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:01.489 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:01.489 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:01.489 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:01.489 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:01.489 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:01.489 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:01.750 20:39:05 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:01.750 20:39:05 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:01.750 20:39:05 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:01.750 20:39:05 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:01.750 20:39:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:01.750 20:39:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:01.750 20:39:05 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:01.750 
20:39:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:01.750 20:39:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:01.750 20:39:05 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:01.750 20:39:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:01.750 20:39:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:01.750 20:39:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:01.750 20:39:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:01.750 20:39:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:01.750 No valid GPT data, bailing 00:03:01.750 20:39:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:01.750 20:39:05 -- scripts/common.sh@391 -- # pt= 00:03:01.750 20:39:05 -- scripts/common.sh@392 -- # return 1 00:03:01.750 20:39:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:01.750 1+0 records in 00:03:01.750 1+0 records out 00:03:01.750 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447528 s, 234 MB/s 00:03:01.750 20:39:05 -- spdk/autotest.sh@118 -- # sync 00:03:01.750 20:39:05 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:01.750 20:39:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:01.750 20:39:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:09.944 20:39:13 -- spdk/autotest.sh@124 -- # uname -s 00:03:09.944 20:39:13 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:09.944 20:39:13 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:09.944 20:39:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.944 20:39:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.944 20:39:13 -- common/autotest_common.sh@10 -- # set +x 00:03:09.944 ************************************ 00:03:09.944 START TEST setup.sh 00:03:09.944 ************************************ 00:03:09.944 20:39:13 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:09.944 * Looking for test storage... 00:03:09.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:09.944 20:39:13 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:09.944 20:39:13 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:09.944 20:39:13 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:09.944 20:39:13 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.944 20:39:13 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.944 20:39:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:09.944 ************************************ 00:03:09.944 START TEST acl 00:03:09.944 ************************************ 00:03:09.944 20:39:13 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:10.207 * Looking for test storage... 
00:03:10.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:10.207 20:39:13 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:10.207 20:39:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:10.207 20:39:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:10.207 20:39:13 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:10.207 20:39:13 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:10.207 20:39:13 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:10.207 20:39:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:10.207 20:39:13 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:10.207 20:39:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:10.207 20:39:13 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:10.207 20:39:13 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:10.207 20:39:13 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:10.207 20:39:13 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:10.207 20:39:13 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:10.207 20:39:13 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.207 20:39:13 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.410 20:39:17 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:14.410 20:39:17 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:14.410 20:39:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:14.410 20:39:17 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:14.410 20:39:17 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.410 20:39:17 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:17.712 Hugepages 00:03:17.712 node hugesize free / total 00:03:17.712 20:39:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:17.712 20:39:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:17.712 20:39:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 00:03:17.712 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.712 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.713 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:17.713 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.713 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.713 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.713 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:17.713 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.713 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.713 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.713 20:39:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:17.713 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.713 20:39:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.713 20:39:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.713 20:39:21 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:17.713 20:39:21 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:17.713 20:39:21 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:17.713 20:39:21 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:17.713 20:39:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:17.713 ************************************ 00:03:17.713 START TEST denied 00:03:17.713 ************************************ 00:03:17.713 20:39:21 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:17.713 20:39:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:17.713 20:39:21 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:17.713 20:39:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:17.713 20:39:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.713 20:39:21 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:21.920 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:21.920 20:39:25 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:21.920 20:39:25 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:21.920 20:39:25 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:21.920 20:39:25 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:21.920 20:39:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:21.920 20:39:25 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:21.920 20:39:25 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:21.920 20:39:25 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:21.920 20:39:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:21.920 20:39:25 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.131 00:03:26.131 real 0m8.119s 00:03:26.131 user 0m2.550s 00:03:26.131 sys 0m4.747s 00:03:26.131 20:39:29 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.131 20:39:29 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:26.131 ************************************ 00:03:26.131 END TEST denied 00:03:26.131 ************************************ 00:03:26.131 20:39:29 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:26.131 20:39:29 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:26.131 20:39:29 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.131 20:39:29 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.131 20:39:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:26.131 ************************************ 00:03:26.131 START TEST allowed 00:03:26.131 ************************************ 00:03:26.131 20:39:29 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:26.131 20:39:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:26.131 20:39:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:26.131 20:39:29 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:26.131 20:39:29 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.131 20:39:29 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:31.415 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:31.415 20:39:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:31.415 20:39:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:31.415 20:39:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:31.415 20:39:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.415 20:39:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.635 00:03:35.635 real 0m9.465s 00:03:35.635 user 0m2.812s 00:03:35.635 sys 0m4.962s 00:03:35.635 20:39:38 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.635 20:39:38 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:35.635 ************************************ 00:03:35.635 END TEST allowed 00:03:35.635 ************************************ 00:03:35.635 20:39:38 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:35.635 00:03:35.635 real 0m25.165s 00:03:35.635 user 0m8.123s 00:03:35.635 sys 0m14.690s 00:03:35.635 20:39:38 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.635 20:39:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:35.635 ************************************ 00:03:35.635 END TEST acl 00:03:35.635 ************************************ 00:03:35.635 20:39:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:35.635 20:39:38 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:35.635 20:39:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.635 20:39:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.635 20:39:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:35.635 ************************************ 00:03:35.635 START TEST hugepages 00:03:35.635 ************************************ 00:03:35.635 20:39:39 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:35.635 * Looking for test storage... 00:03:35.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 102791168 kB' 'MemAvailable: 106278808 kB' 'Buffers: 2704 kB' 'Cached: 14482964 kB' 'SwapCached: 0 kB' 'Active: 11525468 kB' 'Inactive: 3523448 kB' 'Active(anon): 11051284 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566620 kB' 'Mapped: 168868 kB' 'Shmem: 10488036 kB' 'KReclaimable: 530952 kB' 'Slab: 1405048 kB' 'SReclaimable: 530952 kB' 'SUnreclaim: 874096 kB' 'KernelStack: 27328 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460892 kB' 'Committed_AS: 12632632 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.635 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:35.636 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
echo 0 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:35.637 20:39:39 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:35.637 20:39:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.637 20:39:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.637 20:39:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:35.637 ************************************ 00:03:35.637 START TEST default_setup 00:03:35.637 ************************************ 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.637 20:39:39 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.954 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:38.954 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:38.954 0000:80:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:03:38.954 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:38.954 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:38.954 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:38.954 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:38.954 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:38.954 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:38.954 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:38.954 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:38.954 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:38.954 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:38.954 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:38.954 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:38.954 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:38.954 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:39.215 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:39.215 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:39.215 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:39.215 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:39.215 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:39.215 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:39.215 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104946588 kB' 'MemAvailable: 108434196 kB' 'Buffers: 2704 kB' 'Cached: 14483084 kB' 'SwapCached: 0 kB' 'Active: 11545672 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071488 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586796 kB' 'Mapped: 168996 kB' 'Shmem: 10488156 kB' 'KReclaimable: 530920 kB' 'Slab: 1403060 kB' 'SReclaimable: 530920 
kB' 'SUnreclaim: 872140 kB' 'KernelStack: 27344 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12649956 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.216 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104947312 kB' 'MemAvailable: 108434920 kB' 'Buffers: 2704 kB' 'Cached: 14483088 kB' 'SwapCached: 0 kB' 'Active: 11546032 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071848 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587228 kB' 'Mapped: 169048 kB' 'Shmem: 10488160 kB' 'KReclaimable: 530920 kB' 'Slab: 1403004 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872084 kB' 'KernelStack: 27360 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12649980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.217 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.218 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.218 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.218 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.218 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.218 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.218 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.218 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.218 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.218 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.484 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104948628 kB' 'MemAvailable: 108436236 kB' 'Buffers: 2704 kB' 'Cached: 14483104 kB' 'SwapCached: 0 kB' 'Active: 11546004 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071820 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587172 kB' 'Mapped: 169048 kB' 'Shmem: 10488176 kB' 'KReclaimable: 530920 kB' 'Slab: 1403004 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872084 kB' 'KernelStack: 27360 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12650004 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235364 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.485 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 
20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.486 20:39:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.486 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.487 20:39:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:39.487 nr_hugepages=1024 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:39.487 resv_hugepages=0 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:39.487 surplus_hugepages=0 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:39.487 anon_hugepages=0 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 
20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104949860 kB' 'MemAvailable: 108437468 kB' 'Buffers: 2704 kB' 'Cached: 14483128 kB' 'SwapCached: 0 kB' 'Active: 11546060 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071876 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587200 kB' 'Mapped: 169048 kB' 'Shmem: 10488200 kB' 'KReclaimable: 530920 kB' 'Slab: 1403004 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872084 kB' 'KernelStack: 27360 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12650160 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.487 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 
20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.488 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52489996 kB' 'MemUsed: 13169012 kB' 'SwapCached: 0 kB' 'Active: 4923208 kB' 'Inactive: 3300004 kB' 'Active(anon): 4770648 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3300004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7900116 kB' 'Mapped: 86548 kB' 'AnonPages: 326504 kB' 'Shmem: 4447552 kB' 'KernelStack: 16264 kB' 'PageTables: 5400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 397484 kB' 'Slab: 921512 kB' 
'SReclaimable: 397484 kB' 'SUnreclaim: 524028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.489 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:39.490 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:39.491 node0=1024 expecting 1024 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:39.491 00:03:39.491 real 0m4.017s 00:03:39.491 user 0m1.627s 00:03:39.491 sys 0m2.414s 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.491 20:39:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:39.491 ************************************ 00:03:39.491 END TEST default_setup 00:03:39.491 ************************************ 00:03:39.491 20:39:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:39.491 20:39:43 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:39.491 20:39:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.491 20:39:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.491 20:39:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:39.491 ************************************ 00:03:39.491 START TEST per_node_1G_alloc 00:03:39.491 ************************************ 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- 
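At this point the trace shows default_setup wrapping up: the HugePages_Surp read above came back as 0, setup/hugepages.sh folds it into nodes_test, and the test prints "node0=1024 expecting 1024" before comparing. Below is a minimal sketch of that per-node check, reconstructed from the xtrace lines; the array names follow the trace, but the values and surrounding scaffolding are assumptions for illustration, not the actual setup/hugepages.sh.

#!/usr/bin/env bash
# Reconstructed per-node verification (cf. the traced setup/hugepages.sh@117-130).
# Values mirror this run: 1024 pages allocated on node 0, 0 surplus pages.
nodes_test=(1024)              # pages the test expects per NUMA node (index = node id)
nodes_sys=(0)                  # pages that were already present before the test
declare -A sorted_t sorted_s   # distinct counts, as collected by the traced loop

for node in "${!nodes_test[@]}"; do
    surp=0                                   # get_meminfo HugePages_Surp returned 0 above
    (( nodes_test[node] += surp ))           # surplus pages count toward the node's total
    sorted_t[${nodes_test[node]}]=1
    sorted_s[${nodes_sys[node]}]=1
    echo "node${node}=${nodes_test[node]} expecting 1024"
done

[[ ${nodes_test[0]} -eq 1024 ]] && echo "default_setup: hugepage count matches"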
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.491 20:39:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:42.791 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:42.791 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:42.791 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:42.791 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:42.791 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:42.791 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:42.791 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:42.791 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:42.791 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:42.791 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:42.791 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:42.791 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:42.791 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:42.791 
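The per_node_1G_alloc trace just above sizes the pool before scripts/setup.sh runs: 1048576 kB (1 GiB) is requested for nodes 0 and 1, nr_hugepages becomes 512, and NRHUGE=512 HUGENODE=0,1 are exported. The division itself is not shown in the xtrace, but the numbers are consistent with splitting the requested size by the 2048 kB default hugepage size; the sketch below spells that arithmetic out with assumed variable names, as an illustration rather than the real get_test_nr_hugepages.

#!/usr/bin/env bash
# Assumed reconstruction of the sizing step traced above: 1 GiB worth of
# 2 MiB hugepages, spread across the listed NUMA nodes.
size_kb=1048576          # requested size, from "get_test_nr_hugepages 1048576 0 1"
default_hugepages=2048   # kB per page ("Hugepagesize: 2048 kB" in the meminfo dumps)
node_ids=(0 1)

(( size_kb >= default_hugepages )) || { echo "size too small" >&2; exit 1; }
nr_hugepages=$(( size_kb / default_hugepages ))   # 1048576 / 2048 = 512

declare -a nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages                # 512 pages on node 0 and on node 1
done

echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${node_ids[*]}")"
# -> NRHUGE=512 HUGENODE=0,1, matching the environment passed to scripts/setup.sh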
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:42.791 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:42.791 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:42.791 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104969572 kB' 'MemAvailable: 108457180 kB' 'Buffers: 2704 kB' 'Cached: 14483260 kB' 'SwapCached: 0 kB' 'Active: 11545576 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071392 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586388 kB' 'Mapped: 168120 kB' 'Shmem: 10488332 kB' 'KReclaimable: 530920 kB' 'Slab: 1403460 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872540 kB' 'KernelStack: 27584 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12642032 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
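The /proc/meminfo snapshot printed above already reflects the earlier default_setup allocation: HugePages_Total and HugePages_Free are both 1024 and Hugetlb is 2097152 kB, i.e. 1024 pages x 2048 kB. As a side check (not part of the SPDK scripts), the same identity can be verified on any box where only the default pool size is in use:

#!/usr/bin/env bash
# Consistency check over the hugepage fields of /proc/meminfo. With a single
# pool size, Hugetlb should equal HugePages_Total * Hugepagesize.
read -r total size_kb hugetlb < <(awk '
    /^HugePages_Total:/ { t = $2 }
    /^Hugepagesize:/    { s = $2 }
    /^Hugetlb:/         { h = $2 }
    END                 { print t, s, h }' /proc/meminfo)

if (( total * size_kb == hugetlb )); then
    echo "hugepage pool consistent: ${total} x ${size_kb} kB = ${hugetlb} kB"
else
    echo "other hugepage sizes are in use (Hugetlb covers every pool size)"
fi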
continue 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.054 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.055 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104967920 kB' 'MemAvailable: 108455528 kB' 'Buffers: 2704 kB' 'Cached: 14483264 kB' 'SwapCached: 0 kB' 'Active: 11546160 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071976 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586904 kB' 'Mapped: 168088 kB' 'Shmem: 10488336 kB' 'KReclaimable: 530920 kB' 'Slab: 1403448 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872528 kB' 'KernelStack: 27552 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12640444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 
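Each get_meminfo call in this trace starts the same way: node is left empty, so mem_f stays /proc/meminfo, the file is read with mapfile, and any "Node N " prefix is stripped so that per-node files under /sys/devices/system/node parse the same way as the system-wide file. A rough reconstruction of that source selection, with names taken from the xtrace but otherwise assumed (this is not the actual setup/common.sh):

#!/usr/bin/env bash
shopt -s extglob

get_meminfo_file() {
    local node=$1
    local mem_f=/proc/meminfo
    # per-node meminfo lines look like "Node 0 MemTotal: ... kB"
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && \
        mem_f=/sys/devices/system/node/node$node/meminfo

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix if present
    printf '%s\n' "${mem[@]}"
}

get_meminfo_file ""   # system-wide view, as in the trace above (node is empty)
get_meminfo_file 0    # per-node view, as used for the node 0 checks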
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.056 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.057 
20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.057 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 20:39:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.323 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104967172 kB' 'MemAvailable: 108454780 kB' 'Buffers: 2704 kB' 'Cached: 14483284 kB' 'SwapCached: 0 kB' 'Active: 11545984 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071800 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586696 kB' 'Mapped: 168080 kB' 'Shmem: 10488356 kB' 'KReclaimable: 530920 kB' 'Slab: 1403472 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872552 kB' 'KernelStack: 27520 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12642076 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235668 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.324 
20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.324 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.325 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.326 20:39:46 
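(Editor's note, for readability of the trace above: the repeated "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]] / continue" lines are setup/common.sh's get_meminfo helper scanning every key/value pair of the captured meminfo snapshot until it hits the requested field; here it returns 0 for both HugePages_Surp and HugePages_Rsvd. The following is a minimal standalone sketch of that parsing loop, reconstructed from the xtrace output, not the verbatim SPDK source; the function and variable names follow what the trace prints, while the loop-over-array structure is an approximation.)

    #!/usr/bin/env bash
    # Sketch of the get_meminfo helper as reconstructed from the xtrace above.
    # Prints the value of one field from /proc/meminfo, or from a per-node
    # meminfo file when a NUMA node number is passed as the second argument.
    shopt -s extglob

    get_meminfo() {
            local get=$1
            local node=$2
            local var val
            local mem_f mem

            mem_f=/proc/meminfo
            # With a node number, read the per-node sysfs copy instead.
            if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
                    mem_f=/sys/devices/system/node/node$node/meminfo
            fi

            mapfile -t mem < "$mem_f"
            # Per-node files prefix every line with "Node <n> "; strip that prefix.
            mem=("${mem[@]#Node +([0-9]) }")

            local line
            for line in "${mem[@]}"; do
                    IFS=': ' read -r var val _ <<< "$line"
                    [[ $var == "$get" ]] || continue
                    echo "$val"
                    return 0
            done
            return 1
    }

    # Example matching the trace: both values are 0 on this machine.
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    echo "surp=$surp resv=$resv"
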
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:43.326 nr_hugepages=1024 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.326 resv_hugepages=0 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.326 surplus_hugepages=0 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.326 anon_hugepages=0 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104968424 kB' 'MemAvailable: 108456032 kB' 'Buffers: 2704 kB' 'Cached: 14483284 kB' 'SwapCached: 0 kB' 'Active: 11545504 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071320 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586220 kB' 'Mapped: 168088 kB' 'Shmem: 10488356 kB' 'KReclaimable: 530920 kB' 'Slab: 1403472 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872552 kB' 'KernelStack: 27488 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12640488 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 
98566144 kB' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.326 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.327 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.328 20:39:47 
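(Editor's note: at setup/hugepages.sh@99-@110 in the trace, the script records surp=0 and resv=0, echoes the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary, then reads HugePages_Total (1024) and verifies it equals nr_hugepages + surp + resv before moving on to the per-node breakdown. A small sketch of that consistency check, assuming the get_meminfo sketch above; the "total" variable name is illustrative, not from the source.)

    # Sketch of the global-counter verification step seen in the trace.
    nr_hugepages=1024

    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"

    total=$(get_meminfo HugePages_Total) # 1024 in this run

    # Proceed only if the kernel's global total matches the requested
    # allocation plus any surplus/reserved pages.
    (( total == nr_hugepages + surp + resv )) || exit 1
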
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53545376 kB' 'MemUsed: 12113632 kB' 'SwapCached: 0 kB' 'Active: 4920016 kB' 'Inactive: 3300004 kB' 'Active(anon): 4767456 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3300004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7900156 kB' 'Mapped: 86088 kB' 'AnonPages: 323004 kB' 'Shmem: 4447592 kB' 'KernelStack: 16184 kB' 'PageTables: 4928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 397484 kB' 'Slab: 922052 kB' 'SReclaimable: 397484 kB' 'SUnreclaim: 524568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.328 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 
20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.329 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51424252 kB' 'MemUsed: 9255620 kB' 'SwapCached: 0 kB' 'Active: 6625420 kB' 'Inactive: 223444 kB' 'Active(anon): 6303796 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 223444 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6585876 kB' 'Mapped: 82000 kB' 'AnonPages: 263136 kB' 'Shmem: 6040808 kB' 
'KernelStack: 11144 kB' 'PageTables: 3352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133436 kB' 'Slab: 481420 kB' 'SReclaimable: 133436 kB' 'SUnreclaim: 347984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.330 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.331 20:39:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:43.331 node0=512 expecting 512 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:43.331 node1=512 expecting 512 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:43.331 00:03:43.331 real 0m3.742s 00:03:43.331 user 0m1.538s 00:03:43.331 sys 0m2.263s 00:03:43.331 20:39:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.331 20:39:47 
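The field-by-field loop traced above is setup/common.sh's get_meminfo helper walking a meminfo file until it reaches the requested key (here HugePages_Surp for NUMA nodes 0 and 1). A minimal standalone sketch of that pattern, reconstructed from the trace and slightly simplified (the real helper streams the mapfile'd array back through printf; the paths and field layout assumed here are the standard kernel ones):

# Sketch of the get_meminfo pattern seen in the trace (simplified; not the verbatim SPDK helper).
shopt -s extglob                      # needed for the "Node <n> " prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs when a node index is passed.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node <n> "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"               # kB for byte counters, page counts for HugePages_* fields
            return 0
        fi
    done
    return 1
}

# e.g. surplus 2 MiB hugepages currently allocated on NUMA node 1:
get_meminfo HugePages_Surp 1

The xtrace output is this long simply because every meminfo field that is not the requested key shows up as a [[ ... ]] mismatch followed by continue.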
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:43.331 ************************************ 00:03:43.331 END TEST per_node_1G_alloc 00:03:43.331 ************************************ 00:03:43.331 20:39:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:43.331 20:39:47 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:43.331 20:39:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.331 20:39:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.331 20:39:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.331 ************************************ 00:03:43.331 START TEST even_2G_alloc 00:03:43.331 ************************************ 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:43.331 20:39:47 
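even_2G_alloc requests 2097152 kB of the default 2048 kB hugepages, i.e. 1024 pages, and with no user-supplied node list splits them evenly across the two NUMA nodes before exporting NRHUGE=1024 and HUGE_EVEN_ALLOC=yes for setup.sh. A short sketch of that split, with the variable names taken from the hugepages.sh@81-@84 trace lines and the arithmetic reconstructed from the values they show:

# Sketch of the even per-node hugepage split traced above (expressions inferred from the traced values).
size_kb=2097152                                        # requested total: 2 GiB
hugepage_kb=2048                                       # default 2 MiB hugepage size
_nr_hugepages=$((size_kb / hugepage_kb))               # 1024 pages
_no_nodes=2
declare -a nodes_test

while ((_no_nodes > 0)); do
    # Give the highest-numbered remaining node its share, then shrink the pool.
    nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
    : $((_nr_hugepages -= nodes_test[_no_nodes - 1]))
    : $((--_no_nodes))
done

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512

In this sketch any remainder ends up on the lower-numbered nodes; with 1024 pages and two nodes each gets exactly 512, which is what the verification that follows expects.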
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.331 20:39:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.631 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:46.631 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.631 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.631 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.631 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:46.631 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.631 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.631 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.631 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:46.631 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:46.631 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.631 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.631 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.631 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:46.631 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.631 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.631 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 
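verify_nr_hugepages first checks whether transparent hugepages are still enabled (the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test above is matching the bracketed selection in the kernel's THP mode string) and, if they are, records the current AnonHugePages usage before counting the explicitly reserved pool. A rough self-contained sketch of that first step, assuming the standard sysfs path for the THP setting:

# Rough sketch of the THP / AnonHugePages check traced above (assumed sysfs path; simplified).
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP is not disabled, so anonymous hugepages may exist outside the explicit hugetlb pool.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "AnonHugePages in use: ${anon:-0} kB"

In this run AnonHugePages is 0 kB, which is why the trace below ends the scan with "echo 0" and carries anon=0 forward.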
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105012192 kB' 'MemAvailable: 108499800 kB' 'Buffers: 2704 kB' 'Cached: 14483440 kB' 'SwapCached: 0 kB' 'Active: 11544376 kB' 'Inactive: 3523448 kB' 'Active(anon): 11070192 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584944 kB' 'Mapped: 168116 kB' 'Shmem: 10488512 kB' 'KReclaimable: 530920 kB' 'Slab: 1403292 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872372 kB' 'KernelStack: 27328 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12640140 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235700 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.895 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105013644 kB' 'MemAvailable: 108501252 kB' 'Buffers: 2704 kB' 'Cached: 14483444 kB' 'SwapCached: 0 kB' 'Active: 11543564 kB' 'Inactive: 3523448 kB' 'Active(anon): 11069380 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584096 kB' 'Mapped: 168096 kB' 'Shmem: 10488516 kB' 'KReclaimable: 530920 kB' 'Slab: 1403144 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872224 kB' 'KernelStack: 27328 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12640156 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235652 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.896 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.897 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105014372 kB' 'MemAvailable: 108501980 kB' 'Buffers: 2704 kB' 'Cached: 14483460 kB' 'SwapCached: 0 kB' 'Active: 11543520 kB' 'Inactive: 3523448 kB' 'Active(anon): 11069336 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584040 kB' 'Mapped: 168096 kB' 'Shmem: 10488532 kB' 'KReclaimable: 530920 kB' 'Slab: 1403144 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872224 kB' 'KernelStack: 27344 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12640176 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235652 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
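The surplus lookup traced above is the generic get_meminfo helper from setup/common.sh doing a key lookup over /proc/meminfo: slurp the file with mapfile, strip any "Node N " prefix, then split each line on ': ' and echo the value once the requested key matches. Below is a standalone sketch of that parsing, reconstructed from the trace rather than copied from setup/common.sh; the per-node path handling and the 0 fallback are assumptions based on the checks visible in the trace.

    #!/usr/bin/env bash
    shopt -s extglob

    # Sketch of the lookup traced above: print the value of one meminfo key,
    # optionally from a per-NUMA-node meminfo file.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node lookup, as in the node=0 calls later in this trace.
        [[ -e /sys/devices/system/node/node${node}/meminfo ]] &&
            mem_f=/sys/devices/system/node/node${node}/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp      # prints 0 in this run
    get_meminfo HugePages_Surp 0    # same key, read from node0/meminfo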
00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.898 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 
20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
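For reference, the huge page counters this scan is walking over can be pulled out of /proc/meminfo directly; the values below are copied from the meminfo dump captured earlier in the trace (column alignment approximate), the grep is just a manual-check equivalent and not what the script itself runs.

    $ grep -E '^(HugePages_|Hugepagesize|Hugetlb)' /proc/meminfo
    HugePages_Total:    1024
    HugePages_Free:     1024
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    Hugetlb:         2097152 kB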
00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.899 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.900 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.163 nr_hugepages=1024 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.163 resv_hugepages=0 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.163 surplus_hugepages=0 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.163 anon_hugepages=0 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.163 20:39:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105014372 kB' 'MemAvailable: 108501980 kB' 'Buffers: 2704 kB' 'Cached: 14483460 kB' 'SwapCached: 0 kB' 'Active: 11543456 kB' 'Inactive: 3523448 kB' 'Active(anon): 11069272 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583976 kB' 'Mapped: 168096 kB' 'Shmem: 10488532 kB' 'KReclaimable: 530920 kB' 'Slab: 1403144 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872224 kB' 'KernelStack: 27344 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12640200 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235652 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
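With anon, surplus and reserved pages all at 0, the check reduces to HugePages_Total matching the 1024 pages the test requested, after which the pool is expected to be split evenly across the two NUMA nodes (512 pages each, as the trace goes on to show). A condensed sketch of that arithmetic follows; the variable names mirror hugepages.sh but this is not its exact code, and get_meminfo is the helper sketched earlier.

    nr_hugepages=1024
    anon=0 surp=0 resv=0

    total=$(get_meminfo HugePages_Total)          # 1024 in this run
    (( total == nr_hugepages + surp + resv )) ||
        echo "hugepage accounting mismatch: got ${total}" >&2

    # Even 2G alloc: expect the pool spread evenly over the NUMA nodes.
    no_nodes=2
    per_node=$(( nr_hugepages / no_nodes ))       # 512 pages per node
    echo "expect ${per_node} x 2048 kB pages on each of ${no_nodes} nodes"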
00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.163 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.164 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53560320 kB' 'MemUsed: 12098688 kB' 'SwapCached: 0 kB' 'Active: 4917740 kB' 'Inactive: 3300004 kB' 'Active(anon): 4765180 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3300004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7900172 kB' 'Mapped: 86116 kB' 'AnonPages: 320712 kB' 'Shmem: 4447608 kB' 'KernelStack: 16200 kB' 'PageTables: 5084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 397484 kB' 'Slab: 921560 kB' 'SReclaimable: 397484 kB' 'SUnreclaim: 524076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.165 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.189 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51453976 kB' 'MemUsed: 9225896 kB' 'SwapCached: 0 kB' 'Active: 6625904 kB' 'Inactive: 223444 kB' 'Active(anon): 6304280 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 223444 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6586056 kB' 'Mapped: 81980 kB' 'AnonPages: 263384 kB' 'Shmem: 6040988 kB' 'KernelStack: 11144 kB' 'PageTables: 3356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133436 kB' 'Slab: 481584 kB' 'SReclaimable: 133436 kB' 'SUnreclaim: 348148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 
20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 
20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:47.191 node0=512 expecting 512 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:47.191 node1=512 expecting 512 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:47.191 00:03:47.191 real 0m3.752s 00:03:47.191 user 0m1.464s 00:03:47.191 sys 0m2.342s 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.191 20:39:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:47.191 ************************************ 00:03:47.191 END TEST even_2G_alloc 00:03:47.191 
************************************ 00:03:47.191 20:39:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:47.191 20:39:50 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:47.191 20:39:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.191 20:39:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.191 20:39:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.191 ************************************ 00:03:47.191 START TEST odd_alloc 00:03:47.191 ************************************ 00:03:47.191 20:39:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:47.191 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:47.191 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:47.191 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:47.191 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.191 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:47.191 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:47.191 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:47.191 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.191 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:47.191 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.191 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.191 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.192 20:39:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
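(For reference, the per-node assignment traced above by get_test_nr_hugepages_per_node — 1025 pages ending up as 513 on node0 and 512 on node1 — can be reproduced with the minimal standalone sketch below. This is an illustration only, not the SPDK setup/hugepages.sh implementation; the variable names size_kb, page_kb, base and rem are hypothetical, and a two-node system with 2048 kB hugepages is assumed.)

#!/usr/bin/env bash
# Illustrative sketch only -- not the SPDK setup/hugepages.sh implementation.
# Splits an odd hugepage count (here 1025 pages of 2048 kB, i.e. HUGEMEM=2049 MB)
# across the NUMA nodes found under /sys/devices/system/node, handing the
# remainder out one page per node, so a 2-node box ends up with 513 + 512 pages.

size_kb=2098176                 # requested hugepage memory in kB (2049 MB)
page_kb=2048                    # default 2 MB hugepage size
nr_hugepages=$(( (size_kb + page_kb - 1) / page_kb ))   # rounds up to 1025

nodes=(/sys/devices/system/node/node[0-9]*)
no_nodes=${#nodes[@]}

declare -a nodes_test
base=$(( nr_hugepages / no_nodes ))      # 512 with two nodes
rem=$(( nr_hugepages % no_nodes ))       # 1 leftover page

for (( i = 0; i < no_nodes; i++ )); do
    nodes_test[i]=$base
done
# distribute the leftover pages one per node, starting at node0
for (( i = 0; i < rem; i++ )); do
    (( nodes_test[i]++ ))
done

for (( i = 0; i < no_nodes; i++ )); do
    echo "node${i}=${nodes_test[i]} hugepages requested"
done

(Run as-is this prints "node0=513 hugepages requested" and "node1=512 hugepages requested" on a two-node machine, matching the nodes_test[0]=513 / nodes_test[1]=512 values traced above, which the subsequent verify_nr_hugepages pass checks per node.)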
00:03:50.503 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:50.503 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:50.503 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:50.503 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:50.503 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:50.503 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:50.504 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:50.504 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:50.504 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:50.504 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:50.504 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:50.504 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:50.504 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:50.504 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:50.504 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:50.504 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:50.504 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105044400 kB' 'MemAvailable: 108532008 kB' 'Buffers: 2704 kB' 'Cached: 14483620 kB' 'SwapCached: 0 kB' 'Active: 11543584 kB' 'Inactive: 3523448 kB' 'Active(anon): 11069400 kB' 'Inactive(anon): 0 kB' 
'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583484 kB' 'Mapped: 168248 kB' 'Shmem: 10488692 kB' 'KReclaimable: 530920 kB' 'Slab: 1403428 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872508 kB' 'KernelStack: 27344 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12640964 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.766 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.767 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.767 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.767 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.767 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.767 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.767 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.767 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.767 20:39:54 
[xtrace elided: the field-by-field scan of /proc/meminfo continues here; every key other than AnonHugePages (Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) fails the pattern test and hits "continue"]
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
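For readability, the helper being traced above boils down to roughly the following. This is a minimal sketch reconstructed from the xtrace, not the verbatim contents of setup/common.sh; the node-handling details and function layout are assumptions.

    # Sketch of a get_meminfo-style lookup (reconstructed from the trace above).
    # get_meminfo KEY [NODE] prints the value of KEY from /proc/meminfo, or from
    # the per-NUMA-node meminfo file when NODE is given.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node counters live under /sys and prefix every line with "Node N ".
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip any "Node N " prefix

        # Scan field by field; on a match, print the numeric value and stop.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Because the scan re-reads the whole snapshot for every key, the trace repeats one "continue" per non-matching /proc/meminfo field on each call, which is why the log around this point is so repetitive.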
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.768 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105044832 kB' 'MemAvailable: 108532440 kB' 'Buffers: 2704 kB' 'Cached: 14483624 kB' 'SwapCached: 0 kB' 'Active: 11542736 kB' 'Inactive: 3523448 kB' 'Active(anon): 11068552 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583160 kB' 'Mapped: 168120 kB' 'Shmem: 10488696 kB' 'KReclaimable: 530920 kB' 'Slab: 1403460 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872540 kB' 'KernelStack: 27344 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12640980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB'
[xtrace elided: the same per-field scan repeats against this snapshot; every key other than HugePages_Surp hits "continue"]
00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
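Called directly, the lookups traced in this block would look roughly like the lines below. This is illustrative only; the variable names on the left are arbitrary and the per-node call assumes a node0 meminfo file exists.

    surp=$(get_meminfo HugePages_Surp)            # system-wide surplus hugepages -> "0" here
    resv=$(get_meminfo HugePages_Rsvd)            # reserved hugepages            -> "0" here
    node0_total=$(get_meminfo HugePages_Total 0)  # per-NUMA-node variant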
var val _ 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.770 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.771 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.771 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.771 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.771 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.771 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.771 20:39:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.771 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.771 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.771 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.771 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.035 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:51.036 nr_hugepages=1025 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:51.036 resv_hugepages=0 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:51.036 surplus_hugepages=0 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:51.036 anon_hugepages=0 00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == 
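The surrounding hugepages.sh logic appears to amount to a consistency check on the odd-sized allocation: request an odd number of hugepages (1025 here), read the anonymous/surplus/reserved counters back, and confirm the kernel-reported total adds up. A rough sketch of that check, reusing the get_meminfo sketch from earlier (names such as "want" are illustrative, not the test's own variables):

    want=1025                              # odd hugepage count under test
    anon=$(get_meminfo AnonHugePages)      # transparent hugepages in use
    surp=$(get_meminfo HugePages_Surp)     # surplus pages
    resv=$(get_meminfo HugePages_Rsvd)     # reserved pages
    echo "nr_hugepages=$want" "resv_hugepages=$resv" "surplus_hugepages=$surp" "anon_hugepages=$anon"

    nr_hugepages=$(get_meminfo HugePages_Total)
    (( want == nr_hugepages + surp + resv ))   # requested count fully accounted for
    (( want == nr_hugepages ))                 # no surplus or reserved pages outstanding

With surp=0, resv=0 and HugePages_Total reported as 1025, both arithmetic checks below succeed.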
00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.036 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105045656 kB' 'MemAvailable: 108533264 kB' 'Buffers: 2704 kB' 'Cached: 14483660 kB' 'SwapCached: 0 kB' 'Active: 11542776 kB' 'Inactive: 3523448 kB' 'Active(anon): 11068592 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583156 kB' 'Mapped: 168120 kB' 'Shmem: 10488732 kB' 'KReclaimable: 530920 kB' 'Slab: 1403460 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872540 kB' 'KernelStack: 27344 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12641020 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB'
[xtrace elided: the per-field scan against HugePages_Total is under way when this excerpt breaks off; MemTotal through Writeback have been checked and skipped so far]
00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.037 20:39:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.037 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53578148 kB' 'MemUsed: 12080860 kB' 'SwapCached: 0 kB' 'Active: 4915884 kB' 'Inactive: 3300004 kB' 'Active(anon): 4763324 
kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3300004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7900176 kB' 'Mapped: 86140 kB' 'AnonPages: 318784 kB' 'Shmem: 4447612 kB' 'KernelStack: 16168 kB' 'PageTables: 4988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 397484 kB' 'Slab: 921740 kB' 'SReclaimable: 397484 kB' 'SUnreclaim: 524256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.038 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51467552 kB' 'MemUsed: 9212320 kB' 'SwapCached: 0 kB' 'Active: 6626936 kB' 'Inactive: 223444 kB' 'Active(anon): 6305312 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 223444 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6586232 kB' 'Mapped: 81980 kB' 'AnonPages: 264372 kB' 'Shmem: 6041164 kB' 'KernelStack: 11176 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133436 kB' 'Slab: 481720 kB' 'SReclaimable: 133436 kB' 'SUnreclaim: 348284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:51.039 
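[Annotation] The trace above and below is the odd_alloc verification walking its meminfo lookup: it dumps /proc/meminfo (or a node's /sys/devices/system/node/nodeN/meminfo), strips the "Node <n>" prefix, and scans key/value pairs until the requested field matches -- HugePages_Total system-wide (1025), then HugePages_Surp per node (0). A minimal stand-alone sketch of that lookup pattern follows; the function name meminfo_value is illustrative only and is not the helper defined in setup/common.sh.

# Illustrative sketch only (not the setup/common.sh helper): print the value
# of one meminfo key, either system-wide or for a single NUMA node.
meminfo_value() {
    local key=$1 node=${2:-} file=/proc/meminfo line var val
    if [[ -n $node ]]; then
        file=/sys/devices/system/node/node${node}/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}          # per-node files prefix every line with "Node <n> "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}
# e.g. on the box traced here: meminfo_value HugePages_Total  -> 1025
#                              meminfo_value HugePages_Surp 0 -> 0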
20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.039 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:51.040 node0=512 expecting 513 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:51.040 node1=513 expecting 512 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:51.040 00:03:51.040 real 0m3.805s 00:03:51.040 user 0m1.539s 00:03:51.040 sys 0m2.322s 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.040 20:39:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:51.040 ************************************ 00:03:51.040 END TEST odd_alloc 00:03:51.040 ************************************ 00:03:51.040 20:39:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:51.040 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:51.040 20:39:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.040 20:39:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.040 20:39:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:51.040 ************************************ 00:03:51.040 START TEST custom_alloc 00:03:51.040 ************************************ 00:03:51.040 20:39:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:51.040 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 
1 )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.041 20:39:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:54.342 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:54.342 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:54.342 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:54.342 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 
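[Annotation] Before scripts/setup.sh is invoked above, custom_alloc has computed a deliberately uneven split: 1048576 kB for nodes_hp[0] and 2097152 kB for nodes_hp[1], which at the 2048 kB hugepage size reported earlier works out to 512 and 1024 pages (1536 total), passed as HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'. A minimal stand-alone sketch of that size-to-pages arithmetic follows (variable names are illustrative, not the hugepages.sh internals); the vfio-pci notices continue after it.

# Illustrative sketch only: derive per-node hugepage counts from kB sizes,
# mirroring the nodes_hp[0]=512 / nodes_hp[1]=1024 split seen in the trace.
hugepgsz_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this system
declare -a nodes_kb=(1048576 2097152)       # requested size per node, in kB
declare -a nodes_pages=()
total=0
for n in "${!nodes_kb[@]}"; do
    nodes_pages[n]=$(( nodes_kb[n] / hugepgsz_kb ))
    total=$(( total + nodes_pages[n] ))
done
hugenode=""
for n in "${!nodes_pages[@]}"; do
    hugenode+="nodes_hp[$n]=${nodes_pages[n]},"
done
echo "HUGENODE=${hugenode%,} total=$total"  # -> nodes_hp[0]=512,nodes_hp[1]=1024 total=1536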
00:03:54.342 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:54.342 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:54.342 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:54.342 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:54.342 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:54.342 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:54.342 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:54.342 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:54.342 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:54.342 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:54.342 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:54.342 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:54.342 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103981032 kB' 'MemAvailable: 107468640 kB' 'Buffers: 2704 kB' 'Cached: 14483796 kB' 'SwapCached: 0 kB' 'Active: 11544636 kB' 'Inactive: 3523448 kB' 'Active(anon): 11070452 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584296 kB' 'Mapped: 168216 kB' 'Shmem: 10488868 kB' 'KReclaimable: 530920 kB' 'Slab: 1403528 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872608 kB' 'KernelStack: 27360 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12641776 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235668 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.603 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.874 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.874 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.874 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.874 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.874 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
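[editor's note] For context on the scan above: setup/common.sh walks /proc/meminfo with IFS=': ' and skips every key until it reaches the requested one (here AnonHugePages, which resolves to 0). A minimal sketch of that lookup pattern follows; meminfo_value is a hypothetical name, not the SPDK helper itself, and it only covers the global file (the trace shows the real helper can also switch to a per-node meminfo file under /sys/devices/system/node/).

#!/usr/bin/env bash
# Illustrative sketch of the lookup pattern seen in the trace: read
# /proc/meminfo line by line with IFS=': ' and skip every key until
# the requested one is found.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    echo 0   # key not present
}

meminfo_value AnonHugePages    # 0 in the run above
meminfo_value HugePages_Total  # 1536 in the run above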
00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.875 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103983636 kB' 'MemAvailable: 107471244 kB' 'Buffers: 2704 kB' 'Cached: 14483800 kB' 'SwapCached: 0 kB' 'Active: 11544176 kB' 'Inactive: 3523448 kB' 'Active(anon): 11069992 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583856 kB' 'Mapped: 168216 kB' 'Shmem: 10488872 kB' 'KReclaimable: 530920 kB' 'Slab: 1403512 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872592 kB' 'KernelStack: 27328 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12641796 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 
20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.876 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
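[editor's note] For readers following the verify step: HugePages_Surp counts hugepages allocated above the configured pool via overcommit, and HugePages_Rsvd counts pages reserved for mappings but not yet faulted in; the test expects both to be 0 here. Outside the harness, the same fields can be dumped directly with plain grep (no SPDK helpers assumed):

grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):' /proc/meminfo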
00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.877 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103983704 kB' 'MemAvailable: 107471312 kB' 'Buffers: 2704 kB' 'Cached: 14483816 kB' 'SwapCached: 0 kB' 'Active: 11543740 kB' 'Inactive: 3523448 kB' 'Active(anon): 11069556 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583864 kB' 'Mapped: 168140 kB' 'Shmem: 10488888 kB' 'KReclaimable: 530920 kB' 'Slab: 1403480 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872560 kB' 'KernelStack: 27328 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12641816 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
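[editor's note] The counts being verified here trace back to the HUGENODE string built earlier in this run, where nodes_hp[0]=512 and nodes_hp[1]=1024 give an expected pool of 1536 hugepages. A minimal sketch of that accounting, assuming two NUMA nodes and the per-node values taken from the trace (this is an illustration, not the exact hugepages.sh code path):

#!/usr/bin/env bash
# Illustrative sketch: accumulate per-node hugepage requests into a
# HUGENODE string and compare the expected total against /proc/meminfo.
nodes_hp=([0]=512 [1]=1024)   # values taken from the trace above

expected=0
parts=()
for node in "${!nodes_hp[@]}"; do
    parts+=("nodes_hp[$node]=${nodes_hp[$node]}")
    (( expected += nodes_hp[node] ))
done
HUGENODE=$(IFS=,; echo "${parts[*]}")
echo "HUGENODE=$HUGENODE expected=$expected"   # expected=1536

actual=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( actual == expected )); then
    echo "hugepage pool matches the requested layout"
else
    echo "mismatch: kernel reports $actual hugepages"
fi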
00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.878 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.879 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:54.880 nr_hugepages=1536 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.880 resv_hugepages=0 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.880 surplus_hugepages=0 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.880 anon_hugepages=0 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103982948 kB' 'MemAvailable: 107470556 kB' 'Buffers: 2704 kB' 'Cached: 14483816 kB' 'SwapCached: 0 kB' 'Active: 11543740 kB' 'Inactive: 3523448 kB' 'Active(anon): 11069556 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 
0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583864 kB' 'Mapped: 168140 kB' 'Shmem: 10488888 kB' 'KReclaimable: 530920 kB' 'Slab: 1403480 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872560 kB' 'KernelStack: 27328 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12641840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
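[editorial note] Once the HugePages_Rsvd scan returns 0, the trace echoes the summary values (nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and then, at hugepages.sh@107/@109, re-reads HugePages_Total from the freshly dumped /proc/meminfo snapshot to confirm the accounting. A small sketch of that check, using the get_meminfo helper sketched above; the numbers are the values from this run and the variable names are illustrative.

    # Accounting check behind the nr_hugepages/resv/surplus lines above.
    nr_hugepages=1536
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1536 in this run
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2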
00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.880 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.881 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53570740 kB' 'MemUsed: 12088268 kB' 'SwapCached: 0 kB' 'Active: 4916112 kB' 'Inactive: 3300004 kB' 'Active(anon): 4763552 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3300004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7900248 kB' 'Mapped: 86160 kB' 'AnonPages: 318980 kB' 'Shmem: 4447684 kB' 'KernelStack: 16168 kB' 'PageTables: 5044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 397484 kB' 'Slab: 921676 kB' 'SReclaimable: 397484 kB' 'SUnreclaim: 524192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.882 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 50413352 kB' 'MemUsed: 10266520 kB' 'SwapCached: 0 kB' 'Active: 6627440 kB' 'Inactive: 223444 kB' 'Active(anon): 6305816 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 223444 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6586308 kB' 'Mapped: 81988 kB' 'AnonPages: 264716 kB' 'Shmem: 6041240 kB' 'KernelStack: 11240 kB' 'PageTables: 3256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133436 kB' 'Slab: 481800 kB' 'SReclaimable: 133436 kB' 'SUnreclaim: 348364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.883 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:54.884 node0=512 expecting 512 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.884 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.885 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.885 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:54.885 node1=1024 expecting 1024 00:03:54.885 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:54.885 00:03:54.885 real 0m3.833s 00:03:54.885 user 0m1.520s 00:03:54.885 sys 0m2.371s 00:03:54.885 20:39:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.885 20:39:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.885 ************************************ 00:03:54.885 END TEST custom_alloc 00:03:54.885 ************************************ 00:03:54.885 20:39:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:54.885 20:39:58 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:54.885 20:39:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.885 20:39:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.885 20:39:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.885 ************************************ 00:03:54.885 START TEST no_shrink_alloc 00:03:54.885 ************************************ 00:03:54.885 20:39:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:54.885 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:54.885 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:54.885 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:54.885 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:54.885 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- 
# get_test_nr_hugepages_per_node 0 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.169 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:58.471 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:58.471 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.471 
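[Editor's note] The hugepages.sh trace just above (local size=2097152 ... nr_hugepages=1024 ... nodes_test[_no_nodes]=1024) is the no_shrink_alloc test working out its per-node hugepage target: a 2097152 kB request at the 2048 kB hugepage size reported in the meminfo dumps below comes out to 1024 pages, all assigned to the single requested node 0 (1024 x 2048 kB = 2097152 kB, exactly the 'Hugetlb: 2097152 kB' line). A minimal sketch of that arithmetic, reconstructed from the values visible in the trace rather than copied from the SPDK script:

  # sketch only: names mirror the trace, body is a simplified reconstruction
  size_kb=2097152                             # argument to get_test_nr_hugepages 2097152 0
  hugepage_kb=2048                            # 'Hugepagesize: 2048 kB' from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepage_kb ))   # = 1024, matching nr_hugepages=1024 above
  user_nodes=(0)                              # only node 0 was requested
  nodes_test=()
  for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_hugepages           # node 0 ends up expecting all 1024 pages
  done
  echo "nodes_test[0]=${nodes_test[0]}"       # -> nodes_test[0]=1024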
20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105073452 kB' 'MemAvailable: 108561060 kB' 'Buffers: 2704 kB' 'Cached: 14483972 kB' 'SwapCached: 0 kB' 'Active: 11546692 kB' 'Inactive: 3523448 kB' 'Active(anon): 11072508 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586400 kB' 'Mapped: 168172 kB' 'Shmem: 10489044 kB' 'KReclaimable: 530920 kB' 'Slab: 1403128 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872208 kB' 'KernelStack: 27424 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12707396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235684 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.471 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.472 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105076584 kB' 'MemAvailable: 108564192 kB' 'Buffers: 2704 kB' 'Cached: 14483976 kB' 'SwapCached: 0 kB' 'Active: 11545428 kB' 'Inactive: 3523448 kB' 
'Active(anon): 11071244 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585604 kB' 'Mapped: 168164 kB' 'Shmem: 10489048 kB' 'KReclaimable: 530920 kB' 'Slab: 1403012 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872092 kB' 'KernelStack: 27568 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12645628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.473 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
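[Editor's note] The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue' around here are the xtrace of the get_meminfo helper in setup/common.sh scanning /proc/meminfo one field at a time: with IFS set to ': ' each line splits into a name and a value, every non-matching name produces one 'continue' record, and the first match is echoed and returned (the '@33 -- # echo 0' / '@33 -- # return 0' steps). A minimal sketch of that pattern, reconstructed from the trace; the real helper also handles per-node /sys/devices/system/node/nodeN/meminfo files via mapfile (the mem_f and node checks in the trace), which is omitted here:

  # sketch only, not the SPDK implementation
  get_meminfo_sketch() {                     # usage: get_meminfo_sketch HugePages_Surp
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # every mismatch logs one 'continue' in xtrace
          echo "$val"                        # numeric value; any trailing 'kB' lands in $_
          return 0
      done < /proc/meminfo
      return 1
  }
  get_meminfo_sketch HugePages_Surp          # prints 0 on this box, matching the echo at common.sh@33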
00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.474 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105075580 kB' 'MemAvailable: 108563188 kB' 'Buffers: 2704 kB' 'Cached: 14483992 kB' 'SwapCached: 0 kB' 'Active: 11544848 kB' 'Inactive: 3523448 kB' 'Active(anon): 11070664 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584960 kB' 'Mapped: 168184 kB' 'Shmem: 10489064 kB' 'KReclaimable: 530920 kB' 'Slab: 1403044 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872124 kB' 'KernelStack: 27456 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 
0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12645652 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.475 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.476 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.740 nr_hugepages=1024 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.740 resv_hugepages=0 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.740 surplus_hugepages=0 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.740 anon_hugepages=0 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.740 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105072364 kB' 'MemAvailable: 108559972 kB' 'Buffers: 2704 kB' 'Cached: 14484016 kB' 'SwapCached: 0 kB' 'Active: 11545216 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071032 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585216 kB' 'Mapped: 168184 kB' 'Shmem: 10489088 kB' 'KReclaimable: 530920 kB' 'Slab: 1403044 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 872124 kB' 'KernelStack: 27536 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12645676 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.741 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52544752 kB' 'MemUsed: 13114256 kB' 'SwapCached: 0 kB' 'Active: 4917584 kB' 'Inactive: 3300004 kB' 'Active(anon): 4765024 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3300004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7900352 kB' 'Mapped: 86184 kB' 'AnonPages: 320388 kB' 'Shmem: 4447788 kB' 'KernelStack: 16280 kB' 'PageTables: 5280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 397484 kB' 'Slab: 921256 kB' 'SReclaimable: 397484 kB' 'SUnreclaim: 523772 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.742 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 
20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.743 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.744 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.744 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.744 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.744 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.744 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.744 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.744 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.744 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:58.744 node0=1024 expecting 1024 00:03:58.744 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:58.744 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:58.744 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:58.744 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:58.744 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.744 20:40:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.043 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.043 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.043 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.043 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.043 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.043 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.043 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.043 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:02.043 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.043 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:02.043 0000:00:01.7 (8086 0b00): Already using the 
vfio-pci driver 00:04:02.043 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.043 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.043 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.043 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.043 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.043 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:02.043 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.043 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105070564 kB' 'MemAvailable: 108558172 kB' 'Buffers: 2704 kB' 'Cached: 14484124 kB' 'SwapCached: 0 kB' 'Active: 11546336 kB' 'Inactive: 3523448 kB' 'Active(anon): 11072152 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585796 kB' 'Mapped: 168200 kB' 'Shmem: 10489196 kB' 'KReclaimable: 530920 kB' 'Slab: 1402836 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 871916 kB' 'KernelStack: 27520 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12646400 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 
235684 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.313 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 
20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 
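The trace above is setup/common.sh's get_meminfo walking /proc/meminfo field by field: it reads each line with IFS=': ', skips every key that does not match the requested one (AnonHugePages here, HugePages_Surp next), then echoes the matching value and returns. The snippet below is a simplified, hypothetical re-implementation of that pattern for illustration only; the function name and the per-node handling are assumptions, and the real SPDK helper does more (it also strips the "Node <id>" prefix from per-node meminfo files).

    #!/usr/bin/env bash
    # Minimal sketch of the lookup pattern visible in the trace (not the SPDK helper itself).
    get_meminfo_value() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Assumption for illustration: per-node stats live under the standard sysfs path.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ... until the key matches
            echo "$val"                        # numeric value; unit (kB) lands in the discarded field
            return 0
        done < "$mem_f"
        echo 0
    }
    # usage: surp=$(get_meminfo_value HugePages_Surp)   # -> 0 in the run logged here
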
00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105068732 kB' 'MemAvailable: 108556340 kB' 'Buffers: 2704 kB' 'Cached: 14484124 kB' 'SwapCached: 0 kB' 'Active: 11547156 kB' 'Inactive: 3523448 kB' 'Active(anon): 11072972 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586608 kB' 'Mapped: 168208 kB' 'Shmem: 10489196 kB' 'KReclaimable: 530920 kB' 'Slab: 1402828 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 871908 kB' 'KernelStack: 27520 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12646420 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.314 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.315 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.316 20:40:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.316 
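With anon=0 and surp=0 resolved, the script moves on to HugePages_Rsvd; verify_nr_hugepages in setup/hugepages.sh folds these into the per-node counts before printing checks like the "node0=1024 expecting 1024" line earlier in this log. The following is a rough, hypothetical illustration of that kind of per-node check, assuming 2 MiB hugepages (Hugepagesize: 2048 kB in the meminfo dump above); it is not the actual verify_nr_hugepages logic, which also subtracts surplus and reserved pages obtained via get_meminfo.

    # Hypothetical sketch: compare hugepages allocated on each NUMA node against the expected count.
    expected=1024
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*/}
        nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        echo "$node=$nr expecting $expected"
        [[ $nr -eq $expected ]] || exit 1
    done
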
20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105069272 kB' 'MemAvailable: 108556880 kB' 'Buffers: 2704 kB' 'Cached: 14484140 kB' 'SwapCached: 0 kB' 'Active: 11546408 kB' 'Inactive: 3523448 kB' 'Active(anon): 11072224 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586308 kB' 'Mapped: 168124 kB' 'Shmem: 10489212 kB' 'KReclaimable: 530920 kB' 'Slab: 1402816 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 871896 kB' 'KernelStack: 27584 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12646572 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235684 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.316 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.317 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.318 nr_hugepages=1024 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.318 resv_hugepages=0 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.318 surplus_hugepages=0 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.318 anon_hugepages=0 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local 
node= 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105072236 kB' 'MemAvailable: 108559844 kB' 'Buffers: 2704 kB' 'Cached: 14484164 kB' 'SwapCached: 0 kB' 'Active: 11546128 kB' 'Inactive: 3523448 kB' 'Active(anon): 11071944 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586004 kB' 'Mapped: 168132 kB' 'Shmem: 10489236 kB' 'KReclaimable: 530920 kB' 'Slab: 1402816 kB' 'SReclaimable: 530920 kB' 'SUnreclaim: 871896 kB' 'KernelStack: 27376 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12643500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4455796 kB' 'DirectMap2M: 32972800 kB' 'DirectMap1G: 98566144 kB' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.318 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.319 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.320 20:40:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52541856 kB' 'MemUsed: 13117152 kB' 'SwapCached: 0 kB' 'Active: 4917772 kB' 'Inactive: 3300004 kB' 'Active(anon): 4765212 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3300004 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7900472 kB' 'Mapped: 86132 kB' 'AnonPages: 320556 kB' 'Shmem: 4447908 kB' 'KernelStack: 16200 kB' 'PageTables: 4724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 397484 kB' 'Slab: 921432 kB' 'SReclaimable: 397484 kB' 'SUnreclaim: 523948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.320 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 
20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
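[editor's note] The trace above is the node-0 HugePages_Surp lookup walking /sys/devices/system/node/node0/meminfo field by field. As a reference for readers of this log, here is a minimal, hedged sketch of that lookup pattern in plain bash; it is illustrative only and is not the setup/common.sh helper itself (the function name get_meminfo_field is invented for the example).

```bash
#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern exercised in the trace above:
# read one field (e.g. HugePages_Surp) from /proc/meminfo, or from a
# per-NUMA-node meminfo file when a node index is given.
get_meminfo_field() {
    local field=$1 node=$2 mem_f=/proc/meminfo key val
    # Per-node files prefix every line with "Node N ", so strip that first.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r key val _; do
        [[ $key == "$field" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo_field HugePages_Surp 0   # prints 0 on this box, per the trace
```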
00:04:02.321 node0=1024 expecting 1024 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.321 00:04:02.321 real 0m7.348s 00:04:02.321 user 0m2.979s 00:04:02.321 sys 0m4.456s 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.321 20:40:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:02.321 ************************************ 00:04:02.321 END TEST no_shrink_alloc 00:04:02.321 ************************************ 00:04:02.321 20:40:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:02.321 20:40:06 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:02.321 20:40:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:02.321 20:40:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.321 20:40:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.321 20:40:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.321 20:40:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.321 20:40:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.321 20:40:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.321 20:40:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.321 20:40:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.321 20:40:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.322 20:40:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.322 20:40:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:02.322 20:40:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:02.322 00:04:02.322 real 0m27.127s 00:04:02.322 user 0m10.921s 00:04:02.322 sys 0m16.581s 00:04:02.322 20:40:06 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.322 20:40:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.322 ************************************ 00:04:02.322 END TEST hugepages 00:04:02.322 ************************************ 00:04:02.322 20:40:06 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:02.322 20:40:06 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:02.322 20:40:06 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.322 20:40:06 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.322 20:40:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:02.583 ************************************ 00:04:02.583 START TEST driver 00:04:02.583 ************************************ 00:04:02.583 20:40:06 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:02.583 * Looking for test storage... 
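clear_hp, traced just before END TEST hugepages above, visits every hugepage size directory on every NUMA node and writes 0 back so the reserved pool is returned to the kernel before the next test suite starts. A hedged stand-alone equivalent, assuming the usual sysfs layout and root privileges (sketch only, not the SPDK function):

  #!/usr/bin/env bash
  # Release all reserved huge pages, per NUMA node and per page size.
  # Writing 0 to nr_hugepages frees every page that is not currently mapped.
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          [ -d "$hp" ] || continue
          echo 0 > "$hp/nr_hugepages"
      done
  done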
00:04:02.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:02.583 20:40:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:02.583 20:40:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:02.583 20:40:06 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.871 20:40:11 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:07.871 20:40:11 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.871 20:40:11 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.871 20:40:11 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:07.871 ************************************ 00:04:07.871 START TEST guess_driver 00:04:07.871 ************************************ 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:07.871 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:07.871 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:07.871 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:07.871 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:07.871 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:07.871 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:07.871 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:07.871 20:40:11 setup.sh.driver.guess_driver 
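guess_driver settles on vfio-pci at this point because the host exposes populated IOMMU groups and modprobe can resolve the whole vfio_pci module chain (the insmod list above). A rough stand-alone version of that decision follows; the uio_pci_generic fallback is the usual alternative when vfio is not viable, and the whole thing is a sketch rather than the driver.sh logic verbatim:

  #!/usr/bin/env bash
  # Pick a userspace PCI driver: prefer vfio-pci when an IOMMU is active and the
  # module dependency chain resolves, otherwise try uio_pci_generic.
  pick_pci_driver() {
      local groups=(/sys/kernel/iommu_groups/*)
      if [ -e "${groups[0]}" ] && modprobe --show-depends vfio_pci &> /dev/null; then
          echo vfio-pci
      elif modprobe --show-depends uio_pci_generic &> /dev/null; then
          echo uio_pci_generic
      else
          echo 'No valid driver found' >&2
          return 1
      fi
  }

  pick_pci_driver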
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:07.871 Looking for driver=vfio-pci 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.871 20:40:11 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.175 20:40:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.175 20:40:15 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:11.175 20:40:15 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:11.175 20:40:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:11.175 20:40:15 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.491 00:04:16.491 real 0m8.799s 00:04:16.491 user 0m2.904s 00:04:16.491 sys 0m5.139s 00:04:16.491 20:40:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.491 20:40:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:16.491 ************************************ 00:04:16.491 END TEST guess_driver 00:04:16.491 ************************************ 00:04:16.491 20:40:19 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:16.491 00:04:16.491 real 0m13.741s 00:04:16.491 user 0m4.368s 00:04:16.491 sys 0m7.832s 00:04:16.491 20:40:19 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.491 20:40:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:16.491 ************************************ 00:04:16.491 END TEST driver 00:04:16.491 ************************************ 00:04:16.491 20:40:20 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:16.491 20:40:20 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:16.491 20:40:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.491 20:40:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.491 20:40:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:16.491 ************************************ 00:04:16.491 START TEST devices 00:04:16.491 ************************************ 00:04:16.491 20:40:20 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:16.491 * Looking for test storage... 00:04:16.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:16.491 20:40:20 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:16.491 20:40:20 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:16.491 20:40:20 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.491 20:40:20 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:20.699 20:40:23 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:20.699 20:40:23 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:20.699 20:40:23 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:20.699 20:40:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.699 20:40:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:20.699 20:40:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:20.699 20:40:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:20.699 20:40:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:20.699 20:40:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:20.699 
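devices.sh begins by screening the candidate disks: any namespace whose queue/zoned attribute reports something other than none is excluded, and (continuing in the trace below) the survivors must clear min_disk_size, 3221225472 bytes, before they can host the mount tests. A small sketch of that screening, assuming the standard sysfs layout where the size attribute counts 512-byte sectors:

  #!/usr/bin/env bash
  # List non-zoned NVMe namespaces large enough for the nvme_mount/dm_mount tests.
  min_disk_size=$((3 * 1024 * 1024 * 1024))      # 3221225472, as in devices.sh

  for dev in /sys/block/nvme*n*; do
      [ -e "$dev" ] || continue
      name=${dev##*/}
      zoned=$(cat "$dev/queue/zoned" 2> /dev/null || echo none)
      [ "$zoned" != "none" ] && continue         # zoned namespaces are skipped
      bytes=$(( $(cat "$dev/size") * 512 ))      # size is reported in 512-byte sectors
      (( bytes >= min_disk_size )) && echo "$name $bytes"
  done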
20:40:23 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:20.699 No valid GPT data, bailing 00:04:20.699 20:40:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:20.699 20:40:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:20.699 20:40:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:20.699 20:40:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:20.699 20:40:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:20.699 20:40:23 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:20.699 20:40:23 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:20.699 20:40:23 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.699 20:40:23 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.699 20:40:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:20.699 ************************************ 00:04:20.699 START TEST nvme_mount 00:04:20.699 ************************************ 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
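block_in_use returns 1 in the trace above, meaning the namespace is free to test against: spdk-gpt.py finds no SPDK-flavoured GPT metadata on it and blkid sees no partition table of any kind ("No valid GPT data, bailing", then an empty PTTYPE). The spdk-gpt.py half is SPDK-specific, but the blkid probe is easy to reuse on its own; a sketch, run as root:

  #!/usr/bin/env bash
  # Treat a whole disk as "in use" if blkid can find any partition table on it.
  disk_has_partition_table() {
      local pt
      pt=$(blkid -s PTTYPE -o value "/dev/$1")
      [ -n "$pt" ]            # e.g. "gpt" or "dos"; empty output means no table
  }

  if disk_has_partition_table nvme0n1; then
      echo "nvme0n1 carries a partition table, leaving it alone"
  else
      echo "nvme0n1 looks blank, safe to use for the mount tests"
  fi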
# (( part <= part_no )) 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:20.699 20:40:23 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:21.282 Creating new GPT entries in memory. 00:04:21.282 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:21.282 other utilities. 00:04:21.282 20:40:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:21.282 20:40:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.282 20:40:24 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:21.282 20:40:24 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:21.282 20:40:24 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:22.286 Creating new GPT entries in memory. 00:04:22.286 The operation has completed successfully. 00:04:22.286 20:40:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:22.286 20:40:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.286 20:40:25 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1345791 00:04:22.286 20:40:26 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.287 20:40:26 
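The nvme_mount test has just run the classic prepare-and-mount sequence: zap the GPT, create a single partition spanning sectors 2048-2099199 (about 1 GiB), format it ext4 and mount it under the test directory. The same steps reduced to a hedged and very destructive sketch; DISK and MNT are placeholders, and the flock/uevent synchronisation the harness wraps around sgdisk is omitted:

  #!/usr/bin/env bash
  set -euo pipefail
  DISK=/dev/nvme0n1                     # placeholder, the whole disk is erased
  MNT=/tmp/nvme_mount                   # placeholder mount point

  sgdisk "$DISK" --zap-all              # destroy existing GPT/MBR structures
  sgdisk "$DISK" --new=1:2048:2099199   # partition 1, 2097152 sectors = 1 GiB
  udevadm settle                        # wait for /dev/nvme0n1p1 to appear
  mkfs.ext4 -qF "${DISK}p1"
  mkdir -p "$MNT"
  mount "${DISK}p1" "$MNT"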
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.287 20:40:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:25.591 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.591 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:25.851 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:25.851 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:25.851 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:25.851 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.851 20:40:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.150 20:40:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.411 20:40:33 
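Every time the test re-runs setup.sh config, the lines above show why the controller at 0000:65:00.0 stays on the kernel nvme driver: its namespace has an active mount (later on, device-mapper holders), so the script refuses to rebind that PCI device to vfio-pci. A rough sketch of such a safety check, an illustration of the idea rather than the setup.sh implementation:

  #!/usr/bin/env bash
  # Report what keeps a block device busy before attempting any driver rebinding.
  dev_is_active() {
      local dev=$1 why=() b
      grep -q "^/dev/${dev}[p0-9]* " /proc/mounts && why+=("mounted")
      for b in /sys/class/block/"$dev"*/holders; do
          # A non-empty holders/ directory means dm, md or similar sits on top.
          [ -n "$(ls -A "$b" 2> /dev/null)" ] && { why+=("has holders"); break; }
      done
      if ((${#why[@]})); then
          echo "$dev: ${why[*]}, leaving it bound"
          return 0
      fi
      return 1
  }

  dev_is_active nvme0n1 || echo "nvme0n1 idle, safe to rebind"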
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.411 20:40:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.833 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.095 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.095 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:33.095 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:33.095 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:33.095 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.095 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.095 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.095 20:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:33.095 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:33.095 00:04:33.095 real 0m12.909s 00:04:33.095 user 0m3.854s 00:04:33.095 sys 0m6.886s 00:04:33.095 20:40:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.095 20:40:36 
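Teardown, traced above, is the mirror image of the setup: unmount the test directory, then wipefs the partition (when one exists) and the whole namespace so the next test starts from a blank disk. The "2 bytes were erased at offset 0x00000438 (ext4): 53 ef" lines are wipefs removing the ext4 superblock magic; the GPT and PMBR signatures go the same way. A hedged sketch with placeholder paths, destructive like the original:

  #!/usr/bin/env bash
  # Undo the nvme_mount setup: unmount if mounted, then wipe on-disk signatures.
  DISK=/dev/nvme0n1        # placeholder
  MNT=/tmp/nvme_mount      # placeholder

  mountpoint -q "$MNT" && umount "$MNT"
  [ -b "${DISK}p1" ] && wipefs --all "${DISK}p1"   # drops the ext4 magic (53 ef)
  [ -b "$DISK" ] && wipefs --all "$DISK"           # drops GPT, backup GPT and PMBR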
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:33.095 ************************************ 00:04:33.095 END TEST nvme_mount 00:04:33.095 ************************************ 00:04:33.095 20:40:36 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:33.095 20:40:36 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:33.095 20:40:36 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.095 20:40:36 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.095 20:40:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:33.095 ************************************ 00:04:33.095 START TEST dm_mount 00:04:33.095 ************************************ 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:33.095 20:40:36 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:34.481 Creating new GPT entries in memory. 00:04:34.481 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:34.481 other utilities. 00:04:34.481 20:40:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:34.481 20:40:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.481 20:40:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:34.481 20:40:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.481 20:40:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:35.423 Creating new GPT entries in memory. 00:04:35.423 The operation has completed successfully. 00:04:35.423 20:40:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:35.423 20:40:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.423 20:40:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:35.423 20:40:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:35.423 20:40:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:36.367 The operation has completed successfully. 00:04:36.367 20:40:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:36.367 20:40:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.367 20:40:39 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1350942 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- 
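dm_mount repeats the partitioning dance with two partitions and then builds a device-mapper node on top of them; the holders/ links that appear under each partition are what the later "holder@nvme0n1p1:dm-0" checks key on. The exact table the harness feeds dmsetup is not visible in this trace, so the linear concatenation below is an assumption made for illustration; the partition paths are placeholders and the commands need root:

  #!/usr/bin/env bash
  set -euo pipefail
  # Sketch: join two partitions into one dm device, then format and use it.
  P1=/dev/nvme0n1p1; P2=/dev/nvme0n1p2        # placeholders
  S1=$(blockdev --getsz "$P1")                # sizes in 512-byte sectors
  S2=$(blockdev --getsz "$P2")

  {
      echo "0 $S1 linear $P1 0"               # sectors 0..S1-1 come from P1
      echo "$S1 $S2 linear $P2 0"             # the next S2 sectors come from P2
  } | dmsetup create nvme_dm_test

  mkfs.ext4 -qF /dev/mapper/nvme_dm_test
  # /sys/class/block/nvme0n1p1/holders/ now lists the dm-N node, which is exactly
  # what the holder@... checks in the trace look for.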
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.367 20:40:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:39.666 20:40:43 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:39.666 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:39.667 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:39.667 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:39.667 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:39.667 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.667 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:39.667 20:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:39.667 20:40:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.667 20:40:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.966 20:40:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.226 20:40:47 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.226 20:40:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:43.226 20:40:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:43.226 20:40:47 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:43.226 20:40:47 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.226 20:40:47 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:43.226 20:40:47 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:43.226 20:40:47 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.226 20:40:47 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:43.226 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:43.226 20:40:47 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:43.226 20:40:47 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:43.226 00:04:43.226 real 0m10.151s 00:04:43.226 user 0m2.588s 00:04:43.226 sys 0m4.555s 00:04:43.226 20:40:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.226 20:40:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:43.226 ************************************ 00:04:43.226 END TEST dm_mount 00:04:43.226 ************************************ 00:04:43.499 20:40:47 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:04:43.499 20:40:47 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:43.499 20:40:47 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:43.499 20:40:47 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.499 20:40:47 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.499 20:40:47 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:43.499 20:40:47 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.499 20:40:47 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:43.758 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:43.758 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:43.758 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:43.758 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:43.758 20:40:47 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:43.758 20:40:47 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.758 20:40:47 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:43.758 20:40:47 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.758 20:40:47 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:43.758 20:40:47 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.758 20:40:47 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:43.758 00:04:43.758 real 0m27.357s 00:04:43.758 user 0m7.883s 00:04:43.758 sys 0m14.100s 00:04:43.758 20:40:47 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.758 20:40:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:43.758 ************************************ 00:04:43.758 END TEST devices 00:04:43.758 ************************************ 00:04:43.758 20:40:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:43.758 00:04:43.758 real 1m33.808s 00:04:43.758 user 0m31.462s 00:04:43.758 sys 0m53.477s 00:04:43.758 20:40:47 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.758 20:40:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:43.758 ************************************ 00:04:43.758 END TEST setup.sh 00:04:43.759 ************************************ 00:04:43.759 20:40:47 -- common/autotest_common.sh@1142 -- # return 0 00:04:43.759 20:40:47 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:47.059 Hugepages 00:04:47.060 node hugesize free / total 00:04:47.060 node0 1048576kB 0 / 0 00:04:47.060 node0 2048kB 2048 / 2048 00:04:47.060 node1 1048576kB 0 / 0 00:04:47.060 node1 2048kB 0 / 0 00:04:47.060 00:04:47.060 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:47.060 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:47.060 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:47.060 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:47.060 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:47.060 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:47.060 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:47.060 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:47.060 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:47.060 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:47.060 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:47.060 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:47.060 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:47.060 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:47.060 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:47.060 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:47.060 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:47.060 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:47.320 20:40:50 -- spdk/autotest.sh@130 -- # uname -s 00:04:47.320 20:40:50 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:47.320 20:40:50 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:47.320 20:40:50 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:50.620 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:50.621 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:50.621 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:50.621 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:50.621 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:50.621 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:50.621 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:50.621 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:50.621 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:50.621 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:50.621 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:50.621 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:50.621 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:50.621 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:50.621 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:50.621 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:52.534 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:52.794 20:40:56 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:53.736 20:40:57 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:53.736 20:40:57 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:53.736 20:40:57 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:53.736 20:40:57 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:53.736 20:40:57 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:53.736 20:40:57 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:53.736 20:40:57 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:53.736 20:40:57 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:53.736 20:40:57 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:53.736 20:40:57 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:53.736 20:40:57 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:53.736 20:40:57 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:57.038 Waiting for block devices as requested 00:04:57.038 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:57.038 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:57.299 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:57.299 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:57.299 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:57.559 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:57.559 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:57.559 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:57.820 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:04:57.820 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:57.820 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:58.080 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:58.080 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:58.080 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:58.341 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:58.341 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:58.341 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:58.602 20:41:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:58.602 20:41:02 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:58.602 20:41:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:58.602 20:41:02 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:04:58.602 20:41:02 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:58.602 20:41:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:58.602 20:41:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:58.602 20:41:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:58.602 20:41:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:58.602 20:41:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:58.602 20:41:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:58.602 20:41:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:58.602 20:41:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:58.602 20:41:02 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:04:58.602 20:41:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:58.602 20:41:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:58.602 20:41:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:58.602 20:41:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:58.602 20:41:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:58.602 20:41:02 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:58.602 20:41:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:58.602 20:41:02 -- common/autotest_common.sh@1557 -- # continue 00:04:58.602 20:41:02 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:58.602 20:41:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.602 20:41:02 -- common/autotest_common.sh@10 -- # set +x 00:04:58.602 20:41:02 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:58.602 20:41:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.602 20:41:02 -- common/autotest_common.sh@10 -- # set +x 00:04:58.602 20:41:02 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:01.909 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:01.909 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:02.170 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:02.170 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:02.170 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:02.170 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:02.170 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:02.170 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:02.170 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:02.170 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
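The pre_cleanup pass traced above gates the namespace revert on two nvme-cli reads: it pulls the oacs field from nvme id-ctrl and tests bit 3 (namespace management, value 8), then checks that unvmcap (unallocated capacity) is zero. A minimal stand-alone sketch of that gate, assuming nvme-cli is installed and using the /dev/nvme0 controller seen in the trace (variable names are illustrative, not the script's own):

    ctrlr=/dev/nvme0
    # "oacs : 0x5f" -> " 0x5f", exactly as the trace shows
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    if (( (oacs & 0x8) != 0 )); then            # bit 3 = Namespace Management supported
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        # unvmcap of 0 means no unallocated capacity, so there is nothing to revert
        (( unvmcap == 0 )) && echo "no unallocated capacity on $ctrlr, skipping revert"
    fi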
00:05:02.170 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:02.170 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:02.170 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:02.170 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:02.170 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:02.170 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:02.170 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:02.743 20:41:06 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:02.743 20:41:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.743 20:41:06 -- common/autotest_common.sh@10 -- # set +x 00:05:02.743 20:41:06 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:02.743 20:41:06 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:02.743 20:41:06 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:02.743 20:41:06 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:02.743 20:41:06 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:02.743 20:41:06 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:02.743 20:41:06 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:02.743 20:41:06 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:02.743 20:41:06 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:02.743 20:41:06 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:02.743 20:41:06 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:02.743 20:41:06 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:02.743 20:41:06 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:02.743 20:41:06 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:02.743 20:41:06 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:02.743 20:41:06 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:02.743 20:41:06 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:02.743 20:41:06 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:02.743 20:41:06 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:02.743 20:41:06 -- common/autotest_common.sh@1593 -- # return 0 00:05:02.743 20:41:06 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:02.743 20:41:06 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:02.743 20:41:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:02.743 20:41:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:02.743 20:41:06 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:02.743 20:41:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.743 20:41:06 -- common/autotest_common.sh@10 -- # set +x 00:05:02.743 20:41:06 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:02.743 20:41:06 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:02.743 20:41:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.743 20:41:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.743 20:41:06 -- common/autotest_common.sh@10 -- # set +x 00:05:02.743 ************************************ 00:05:02.743 START TEST env 00:05:02.743 ************************************ 00:05:02.743 20:41:06 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:02.743 * Looking for test storage... 
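The opal_revert_cleanup step above enumerates NVMe controllers by piping scripts/gen_nvme.sh through jq to pull each traddr, reads the PCI device ID from sysfs, and only reverts controllers whose ID matches 0x0a54; here the drive reports 0xa80a, so the revert is skipped. A condensed sketch of that selection logic, assuming it is run from the SPDK repository root with jq available (loop and variable names are illustrative):

    mapfile -t bdfs < <(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
    for bdf in "${bdfs[@]}"; do
        dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0xa80a for the drive above
        if [[ $dev_id == 0x0a54 ]]; then
            echo "$bdf: eligible for OPAL revert"
        else
            echo "$bdf: skipping (device ID $dev_id)"
        fi
    done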
00:05:02.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:02.743 20:41:06 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:02.743 20:41:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.743 20:41:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.743 20:41:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.004 ************************************ 00:05:03.004 START TEST env_memory 00:05:03.004 ************************************ 00:05:03.004 20:41:06 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:03.004 00:05:03.004 00:05:03.004 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.004 http://cunit.sourceforge.net/ 00:05:03.004 00:05:03.004 00:05:03.004 Suite: memory 00:05:03.004 Test: alloc and free memory map ...[2024-07-15 20:41:06.719527] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:03.004 passed 00:05:03.004 Test: mem map translation ...[2024-07-15 20:41:06.747766] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:03.004 [2024-07-15 20:41:06.747797] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:03.004 [2024-07-15 20:41:06.747845] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:03.004 [2024-07-15 20:41:06.747852] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:03.004 passed 00:05:03.004 Test: mem map registration ...[2024-07-15 20:41:06.808114] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:03.004 [2024-07-15 20:41:06.808144] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:03.004 passed 00:05:03.004 Test: mem map adjacent registrations ...passed 00:05:03.004 00:05:03.004 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.004 suites 1 1 n/a 0 0 00:05:03.004 tests 4 4 4 0 0 00:05:03.004 asserts 152 152 152 0 n/a 00:05:03.004 00:05:03.004 Elapsed time = 0.202 seconds 00:05:03.004 00:05:03.004 real 0m0.217s 00:05:03.004 user 0m0.202s 00:05:03.004 sys 0m0.014s 00:05:03.004 20:41:06 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.004 20:41:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:03.004 ************************************ 00:05:03.004 END TEST env_memory 00:05:03.004 ************************************ 00:05:03.265 20:41:06 env -- common/autotest_common.sh@1142 -- # return 0 00:05:03.265 20:41:06 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:03.265 20:41:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
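Each suite in this log is launched through the run_test helper, which is what produces the paired START TEST / END TEST banners and the suite-tagged xtrace lines. The sketch below is only a rough approximation of the observable behaviour, not the real helper from test/common/autotest_common.sh, which also handles timing, xtrace tagging and failure accounting:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut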
00:05:03.265 20:41:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.265 20:41:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.265 ************************************ 00:05:03.265 START TEST env_vtophys 00:05:03.265 ************************************ 00:05:03.265 20:41:06 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:03.265 EAL: lib.eal log level changed from notice to debug 00:05:03.265 EAL: Detected lcore 0 as core 0 on socket 0 00:05:03.265 EAL: Detected lcore 1 as core 1 on socket 0 00:05:03.265 EAL: Detected lcore 2 as core 2 on socket 0 00:05:03.265 EAL: Detected lcore 3 as core 3 on socket 0 00:05:03.265 EAL: Detected lcore 4 as core 4 on socket 0 00:05:03.265 EAL: Detected lcore 5 as core 5 on socket 0 00:05:03.265 EAL: Detected lcore 6 as core 6 on socket 0 00:05:03.265 EAL: Detected lcore 7 as core 7 on socket 0 00:05:03.265 EAL: Detected lcore 8 as core 8 on socket 0 00:05:03.265 EAL: Detected lcore 9 as core 9 on socket 0 00:05:03.265 EAL: Detected lcore 10 as core 10 on socket 0 00:05:03.265 EAL: Detected lcore 11 as core 11 on socket 0 00:05:03.265 EAL: Detected lcore 12 as core 12 on socket 0 00:05:03.265 EAL: Detected lcore 13 as core 13 on socket 0 00:05:03.265 EAL: Detected lcore 14 as core 14 on socket 0 00:05:03.265 EAL: Detected lcore 15 as core 15 on socket 0 00:05:03.265 EAL: Detected lcore 16 as core 16 on socket 0 00:05:03.265 EAL: Detected lcore 17 as core 17 on socket 0 00:05:03.265 EAL: Detected lcore 18 as core 18 on socket 0 00:05:03.265 EAL: Detected lcore 19 as core 19 on socket 0 00:05:03.265 EAL: Detected lcore 20 as core 20 on socket 0 00:05:03.265 EAL: Detected lcore 21 as core 21 on socket 0 00:05:03.265 EAL: Detected lcore 22 as core 22 on socket 0 00:05:03.265 EAL: Detected lcore 23 as core 23 on socket 0 00:05:03.265 EAL: Detected lcore 24 as core 24 on socket 0 00:05:03.265 EAL: Detected lcore 25 as core 25 on socket 0 00:05:03.265 EAL: Detected lcore 26 as core 26 on socket 0 00:05:03.265 EAL: Detected lcore 27 as core 27 on socket 0 00:05:03.265 EAL: Detected lcore 28 as core 28 on socket 0 00:05:03.265 EAL: Detected lcore 29 as core 29 on socket 0 00:05:03.265 EAL: Detected lcore 30 as core 30 on socket 0 00:05:03.265 EAL: Detected lcore 31 as core 31 on socket 0 00:05:03.265 EAL: Detected lcore 32 as core 32 on socket 0 00:05:03.265 EAL: Detected lcore 33 as core 33 on socket 0 00:05:03.265 EAL: Detected lcore 34 as core 34 on socket 0 00:05:03.265 EAL: Detected lcore 35 as core 35 on socket 0 00:05:03.265 EAL: Detected lcore 36 as core 0 on socket 1 00:05:03.265 EAL: Detected lcore 37 as core 1 on socket 1 00:05:03.265 EAL: Detected lcore 38 as core 2 on socket 1 00:05:03.265 EAL: Detected lcore 39 as core 3 on socket 1 00:05:03.265 EAL: Detected lcore 40 as core 4 on socket 1 00:05:03.265 EAL: Detected lcore 41 as core 5 on socket 1 00:05:03.265 EAL: Detected lcore 42 as core 6 on socket 1 00:05:03.265 EAL: Detected lcore 43 as core 7 on socket 1 00:05:03.265 EAL: Detected lcore 44 as core 8 on socket 1 00:05:03.265 EAL: Detected lcore 45 as core 9 on socket 1 00:05:03.265 EAL: Detected lcore 46 as core 10 on socket 1 00:05:03.265 EAL: Detected lcore 47 as core 11 on socket 1 00:05:03.265 EAL: Detected lcore 48 as core 12 on socket 1 00:05:03.265 EAL: Detected lcore 49 as core 13 on socket 1 00:05:03.265 EAL: Detected lcore 50 as core 14 on socket 1 00:05:03.265 EAL: Detected lcore 51 as core 15 on socket 1 00:05:03.265 
EAL: Detected lcore 52 as core 16 on socket 1 00:05:03.265 EAL: Detected lcore 53 as core 17 on socket 1 00:05:03.265 EAL: Detected lcore 54 as core 18 on socket 1 00:05:03.265 EAL: Detected lcore 55 as core 19 on socket 1 00:05:03.265 EAL: Detected lcore 56 as core 20 on socket 1 00:05:03.265 EAL: Detected lcore 57 as core 21 on socket 1 00:05:03.265 EAL: Detected lcore 58 as core 22 on socket 1 00:05:03.265 EAL: Detected lcore 59 as core 23 on socket 1 00:05:03.265 EAL: Detected lcore 60 as core 24 on socket 1 00:05:03.265 EAL: Detected lcore 61 as core 25 on socket 1 00:05:03.265 EAL: Detected lcore 62 as core 26 on socket 1 00:05:03.265 EAL: Detected lcore 63 as core 27 on socket 1 00:05:03.265 EAL: Detected lcore 64 as core 28 on socket 1 00:05:03.265 EAL: Detected lcore 65 as core 29 on socket 1 00:05:03.265 EAL: Detected lcore 66 as core 30 on socket 1 00:05:03.265 EAL: Detected lcore 67 as core 31 on socket 1 00:05:03.265 EAL: Detected lcore 68 as core 32 on socket 1 00:05:03.265 EAL: Detected lcore 69 as core 33 on socket 1 00:05:03.265 EAL: Detected lcore 70 as core 34 on socket 1 00:05:03.265 EAL: Detected lcore 71 as core 35 on socket 1 00:05:03.265 EAL: Detected lcore 72 as core 0 on socket 0 00:05:03.265 EAL: Detected lcore 73 as core 1 on socket 0 00:05:03.265 EAL: Detected lcore 74 as core 2 on socket 0 00:05:03.265 EAL: Detected lcore 75 as core 3 on socket 0 00:05:03.265 EAL: Detected lcore 76 as core 4 on socket 0 00:05:03.265 EAL: Detected lcore 77 as core 5 on socket 0 00:05:03.265 EAL: Detected lcore 78 as core 6 on socket 0 00:05:03.265 EAL: Detected lcore 79 as core 7 on socket 0 00:05:03.265 EAL: Detected lcore 80 as core 8 on socket 0 00:05:03.265 EAL: Detected lcore 81 as core 9 on socket 0 00:05:03.265 EAL: Detected lcore 82 as core 10 on socket 0 00:05:03.265 EAL: Detected lcore 83 as core 11 on socket 0 00:05:03.265 EAL: Detected lcore 84 as core 12 on socket 0 00:05:03.265 EAL: Detected lcore 85 as core 13 on socket 0 00:05:03.265 EAL: Detected lcore 86 as core 14 on socket 0 00:05:03.265 EAL: Detected lcore 87 as core 15 on socket 0 00:05:03.265 EAL: Detected lcore 88 as core 16 on socket 0 00:05:03.265 EAL: Detected lcore 89 as core 17 on socket 0 00:05:03.265 EAL: Detected lcore 90 as core 18 on socket 0 00:05:03.265 EAL: Detected lcore 91 as core 19 on socket 0 00:05:03.265 EAL: Detected lcore 92 as core 20 on socket 0 00:05:03.265 EAL: Detected lcore 93 as core 21 on socket 0 00:05:03.265 EAL: Detected lcore 94 as core 22 on socket 0 00:05:03.265 EAL: Detected lcore 95 as core 23 on socket 0 00:05:03.265 EAL: Detected lcore 96 as core 24 on socket 0 00:05:03.265 EAL: Detected lcore 97 as core 25 on socket 0 00:05:03.265 EAL: Detected lcore 98 as core 26 on socket 0 00:05:03.265 EAL: Detected lcore 99 as core 27 on socket 0 00:05:03.265 EAL: Detected lcore 100 as core 28 on socket 0 00:05:03.265 EAL: Detected lcore 101 as core 29 on socket 0 00:05:03.265 EAL: Detected lcore 102 as core 30 on socket 0 00:05:03.265 EAL: Detected lcore 103 as core 31 on socket 0 00:05:03.265 EAL: Detected lcore 104 as core 32 on socket 0 00:05:03.265 EAL: Detected lcore 105 as core 33 on socket 0 00:05:03.265 EAL: Detected lcore 106 as core 34 on socket 0 00:05:03.265 EAL: Detected lcore 107 as core 35 on socket 0 00:05:03.265 EAL: Detected lcore 108 as core 0 on socket 1 00:05:03.265 EAL: Detected lcore 109 as core 1 on socket 1 00:05:03.265 EAL: Detected lcore 110 as core 2 on socket 1 00:05:03.265 EAL: Detected lcore 111 as core 3 on socket 1 00:05:03.265 EAL: Detected 
lcore 112 as core 4 on socket 1 00:05:03.265 EAL: Detected lcore 113 as core 5 on socket 1 00:05:03.266 EAL: Detected lcore 114 as core 6 on socket 1 00:05:03.266 EAL: Detected lcore 115 as core 7 on socket 1 00:05:03.266 EAL: Detected lcore 116 as core 8 on socket 1 00:05:03.266 EAL: Detected lcore 117 as core 9 on socket 1 00:05:03.266 EAL: Detected lcore 118 as core 10 on socket 1 00:05:03.266 EAL: Detected lcore 119 as core 11 on socket 1 00:05:03.266 EAL: Detected lcore 120 as core 12 on socket 1 00:05:03.266 EAL: Detected lcore 121 as core 13 on socket 1 00:05:03.266 EAL: Detected lcore 122 as core 14 on socket 1 00:05:03.266 EAL: Detected lcore 123 as core 15 on socket 1 00:05:03.266 EAL: Detected lcore 124 as core 16 on socket 1 00:05:03.266 EAL: Detected lcore 125 as core 17 on socket 1 00:05:03.266 EAL: Detected lcore 126 as core 18 on socket 1 00:05:03.266 EAL: Detected lcore 127 as core 19 on socket 1 00:05:03.266 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:03.266 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:03.266 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:03.266 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:03.266 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:03.266 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:03.266 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:03.266 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:03.266 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:03.266 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:03.266 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:03.266 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:03.266 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:03.266 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:03.266 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:03.266 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:03.266 EAL: Maximum logical cores by configuration: 128 00:05:03.266 EAL: Detected CPU lcores: 128 00:05:03.266 EAL: Detected NUMA nodes: 2 00:05:03.266 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:03.266 EAL: Detected shared linkage of DPDK 00:05:03.266 EAL: No shared files mode enabled, IPC will be disabled 00:05:03.266 EAL: Bus pci wants IOVA as 'DC' 00:05:03.266 EAL: Buses did not request a specific IOVA mode. 00:05:03.266 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:03.266 EAL: Selected IOVA mode 'VA' 00:05:03.266 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.266 EAL: Probing VFIO support... 00:05:03.266 EAL: IOMMU type 1 (Type 1) is supported 00:05:03.266 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:03.266 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:03.266 EAL: VFIO support initialized 00:05:03.266 EAL: Ask a virtual area of 0x2e000 bytes 00:05:03.266 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:03.266 EAL: Setting up physically contiguous memory... 
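The EAL warning just above, "No free 2048 kB hugepages reported on node 1", matches the hugepage table printed by setup.sh status earlier in this log (2048 pages free on node0, none on node1). One way to spot-check that layout outside the harness is to read the standard per-node sysfs counters; the sketch below assumes 2 MB hugepages and the usual /sys/devices/system/node layout:

    for node in /sys/devices/system/node/node[0-9]*; do
        hp=$node/hugepages/hugepages-2048kB
        printf '%s 2048kB: %s free / %s total\n' "${node##*/}" \
            "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
    done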
00:05:03.266 EAL: Setting maximum number of open files to 524288 00:05:03.266 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:03.266 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:03.266 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:03.266 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.266 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:03.266 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.266 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.266 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:03.266 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:03.266 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.266 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:03.266 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.266 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.266 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:03.266 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:03.266 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.266 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:03.266 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.266 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.266 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:03.266 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:03.266 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.266 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:03.266 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.266 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.266 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:03.266 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:03.266 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:03.266 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.266 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:03.266 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.266 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.266 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:03.266 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:03.266 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.266 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:03.266 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.266 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.266 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:03.266 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:03.266 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.266 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:03.266 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.266 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.266 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:03.266 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:03.266 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.266 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:03.266 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.266 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.266 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:03.266 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:03.266 EAL: Hugepages will be freed exactly as allocated. 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: TSC frequency is ~2400000 KHz 00:05:03.266 EAL: Main lcore 0 is ready (tid=7f8fe23a2a00;cpuset=[0]) 00:05:03.266 EAL: Trying to obtain current memory policy. 00:05:03.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.266 EAL: Restoring previous memory policy: 0 00:05:03.266 EAL: request: mp_malloc_sync 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: Heap on socket 0 was expanded by 2MB 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:03.266 EAL: Mem event callback 'spdk:(nil)' registered 00:05:03.266 00:05:03.266 00:05:03.266 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.266 http://cunit.sourceforge.net/ 00:05:03.266 00:05:03.266 00:05:03.266 Suite: components_suite 00:05:03.266 Test: vtophys_malloc_test ...passed 00:05:03.266 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:03.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.266 EAL: Restoring previous memory policy: 4 00:05:03.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.266 EAL: request: mp_malloc_sync 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: Heap on socket 0 was expanded by 4MB 00:05:03.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.266 EAL: request: mp_malloc_sync 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: Heap on socket 0 was shrunk by 4MB 00:05:03.266 EAL: Trying to obtain current memory policy. 00:05:03.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.266 EAL: Restoring previous memory policy: 4 00:05:03.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.266 EAL: request: mp_malloc_sync 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: Heap on socket 0 was expanded by 6MB 00:05:03.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.266 EAL: request: mp_malloc_sync 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: Heap on socket 0 was shrunk by 6MB 00:05:03.266 EAL: Trying to obtain current memory policy. 00:05:03.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.266 EAL: Restoring previous memory policy: 4 00:05:03.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.266 EAL: request: mp_malloc_sync 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: Heap on socket 0 was expanded by 10MB 00:05:03.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.266 EAL: request: mp_malloc_sync 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: Heap on socket 0 was shrunk by 10MB 00:05:03.266 EAL: Trying to obtain current memory policy. 
00:05:03.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.266 EAL: Restoring previous memory policy: 4 00:05:03.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.266 EAL: request: mp_malloc_sync 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: Heap on socket 0 was expanded by 18MB 00:05:03.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.266 EAL: request: mp_malloc_sync 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: Heap on socket 0 was shrunk by 18MB 00:05:03.266 EAL: Trying to obtain current memory policy. 00:05:03.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.266 EAL: Restoring previous memory policy: 4 00:05:03.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.266 EAL: request: mp_malloc_sync 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: Heap on socket 0 was expanded by 34MB 00:05:03.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.266 EAL: request: mp_malloc_sync 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: Heap on socket 0 was shrunk by 34MB 00:05:03.266 EAL: Trying to obtain current memory policy. 00:05:03.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.266 EAL: Restoring previous memory policy: 4 00:05:03.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.266 EAL: request: mp_malloc_sync 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: Heap on socket 0 was expanded by 66MB 00:05:03.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.266 EAL: request: mp_malloc_sync 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: Heap on socket 0 was shrunk by 66MB 00:05:03.266 EAL: Trying to obtain current memory policy. 00:05:03.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.266 EAL: Restoring previous memory policy: 4 00:05:03.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.266 EAL: request: mp_malloc_sync 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: Heap on socket 0 was expanded by 130MB 00:05:03.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.266 EAL: request: mp_malloc_sync 00:05:03.266 EAL: No shared files mode enabled, IPC is disabled 00:05:03.266 EAL: Heap on socket 0 was shrunk by 130MB 00:05:03.266 EAL: Trying to obtain current memory policy. 00:05:03.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.526 EAL: Restoring previous memory policy: 4 00:05:03.527 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.527 EAL: request: mp_malloc_sync 00:05:03.527 EAL: No shared files mode enabled, IPC is disabled 00:05:03.527 EAL: Heap on socket 0 was expanded by 258MB 00:05:03.527 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.527 EAL: request: mp_malloc_sync 00:05:03.527 EAL: No shared files mode enabled, IPC is disabled 00:05:03.527 EAL: Heap on socket 0 was shrunk by 258MB 00:05:03.527 EAL: Trying to obtain current memory policy. 
00:05:03.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.527 EAL: Restoring previous memory policy: 4 00:05:03.527 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.527 EAL: request: mp_malloc_sync 00:05:03.527 EAL: No shared files mode enabled, IPC is disabled 00:05:03.527 EAL: Heap on socket 0 was expanded by 514MB 00:05:03.527 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.527 EAL: request: mp_malloc_sync 00:05:03.527 EAL: No shared files mode enabled, IPC is disabled 00:05:03.527 EAL: Heap on socket 0 was shrunk by 514MB 00:05:03.527 EAL: Trying to obtain current memory policy. 00:05:03.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.787 EAL: Restoring previous memory policy: 4 00:05:03.787 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.787 EAL: request: mp_malloc_sync 00:05:03.787 EAL: No shared files mode enabled, IPC is disabled 00:05:03.787 EAL: Heap on socket 0 was expanded by 1026MB 00:05:03.787 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.048 EAL: request: mp_malloc_sync 00:05:04.048 EAL: No shared files mode enabled, IPC is disabled 00:05:04.048 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:04.048 passed 00:05:04.048 00:05:04.048 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.048 suites 1 1 n/a 0 0 00:05:04.048 tests 2 2 2 0 0 00:05:04.048 asserts 497 497 497 0 n/a 00:05:04.048 00:05:04.048 Elapsed time = 0.655 seconds 00:05:04.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.048 EAL: request: mp_malloc_sync 00:05:04.048 EAL: No shared files mode enabled, IPC is disabled 00:05:04.048 EAL: Heap on socket 0 was shrunk by 2MB 00:05:04.048 EAL: No shared files mode enabled, IPC is disabled 00:05:04.048 EAL: No shared files mode enabled, IPC is disabled 00:05:04.048 EAL: No shared files mode enabled, IPC is disabled 00:05:04.048 00:05:04.048 real 0m0.777s 00:05:04.048 user 0m0.403s 00:05:04.048 sys 0m0.345s 00:05:04.048 20:41:07 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.048 20:41:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:04.048 ************************************ 00:05:04.048 END TEST env_vtophys 00:05:04.048 ************************************ 00:05:04.048 20:41:07 env -- common/autotest_common.sh@1142 -- # return 0 00:05:04.048 20:41:07 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:04.048 20:41:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.048 20:41:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.048 20:41:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.048 ************************************ 00:05:04.048 START TEST env_pci 00:05:04.048 ************************************ 00:05:04.048 20:41:07 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:04.048 00:05:04.048 00:05:04.048 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.048 http://cunit.sourceforge.net/ 00:05:04.048 00:05:04.048 00:05:04.048 Suite: pci 00:05:04.048 Test: pci_hook ...[2024-07-15 20:41:07.836807] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1362016 has claimed it 00:05:04.048 EAL: Cannot find device (10000:00:01.0) 00:05:04.048 EAL: Failed to attach device on primary process 00:05:04.048 passed 00:05:04.048 
00:05:04.048 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.048 suites 1 1 n/a 0 0 00:05:04.048 tests 1 1 1 0 0 00:05:04.048 asserts 25 25 25 0 n/a 00:05:04.048 00:05:04.048 Elapsed time = 0.035 seconds 00:05:04.048 00:05:04.048 real 0m0.055s 00:05:04.048 user 0m0.012s 00:05:04.048 sys 0m0.043s 00:05:04.048 20:41:07 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.048 20:41:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:04.048 ************************************ 00:05:04.048 END TEST env_pci 00:05:04.048 ************************************ 00:05:04.048 20:41:07 env -- common/autotest_common.sh@1142 -- # return 0 00:05:04.048 20:41:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:04.048 20:41:07 env -- env/env.sh@15 -- # uname 00:05:04.048 20:41:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:04.048 20:41:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:04.048 20:41:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:04.048 20:41:07 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:04.048 20:41:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.048 20:41:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.308 ************************************ 00:05:04.308 START TEST env_dpdk_post_init 00:05:04.308 ************************************ 00:05:04.308 20:41:07 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:04.308 EAL: Detected CPU lcores: 128 00:05:04.308 EAL: Detected NUMA nodes: 2 00:05:04.308 EAL: Detected shared linkage of DPDK 00:05:04.308 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:04.308 EAL: Selected IOVA mode 'VA' 00:05:04.308 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.308 EAL: VFIO support initialized 00:05:04.308 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:04.308 EAL: Using IOMMU type 1 (Type 1) 00:05:04.308 EAL: Ignore mapping IO port bar(1) 00:05:04.627 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:04.627 EAL: Ignore mapping IO port bar(1) 00:05:04.627 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:04.888 EAL: Ignore mapping IO port bar(1) 00:05:04.888 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:05.149 EAL: Ignore mapping IO port bar(1) 00:05:05.149 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:05.410 EAL: Ignore mapping IO port bar(1) 00:05:05.410 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:05.671 EAL: Ignore mapping IO port bar(1) 00:05:05.671 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:05.671 EAL: Ignore mapping IO port bar(1) 00:05:05.932 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:05.932 EAL: Ignore mapping IO port bar(1) 00:05:06.193 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:06.453 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:06.453 EAL: Ignore mapping IO port bar(1) 00:05:06.453 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
00:05:06.744 EAL: Ignore mapping IO port bar(1) 00:05:06.744 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:07.035 EAL: Ignore mapping IO port bar(1) 00:05:07.035 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:07.035 EAL: Ignore mapping IO port bar(1) 00:05:07.296 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:07.296 EAL: Ignore mapping IO port bar(1) 00:05:07.557 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:07.557 EAL: Ignore mapping IO port bar(1) 00:05:07.557 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:07.818 EAL: Ignore mapping IO port bar(1) 00:05:07.818 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:08.080 EAL: Ignore mapping IO port bar(1) 00:05:08.080 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:08.080 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:08.080 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:08.341 Starting DPDK initialization... 00:05:08.341 Starting SPDK post initialization... 00:05:08.341 SPDK NVMe probe 00:05:08.341 Attaching to 0000:65:00.0 00:05:08.341 Attached to 0000:65:00.0 00:05:08.341 Cleaning up... 00:05:10.252 00:05:10.252 real 0m5.712s 00:05:10.252 user 0m0.185s 00:05:10.252 sys 0m0.070s 00:05:10.252 20:41:13 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.252 20:41:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.252 ************************************ 00:05:10.252 END TEST env_dpdk_post_init 00:05:10.252 ************************************ 00:05:10.252 20:41:13 env -- common/autotest_common.sh@1142 -- # return 0 00:05:10.252 20:41:13 env -- env/env.sh@26 -- # uname 00:05:10.252 20:41:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:10.252 20:41:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.252 20:41:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.252 20:41:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.252 20:41:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.252 ************************************ 00:05:10.252 START TEST env_mem_callbacks 00:05:10.252 ************************************ 00:05:10.252 20:41:13 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.252 EAL: Detected CPU lcores: 128 00:05:10.252 EAL: Detected NUMA nodes: 2 00:05:10.252 EAL: Detected shared linkage of DPDK 00:05:10.252 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.252 EAL: Selected IOVA mode 'VA' 00:05:10.252 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.252 EAL: VFIO support initialized 00:05:10.252 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:10.252 00:05:10.252 00:05:10.252 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.252 http://cunit.sourceforge.net/ 00:05:10.252 00:05:10.252 00:05:10.252 Suite: memory 00:05:10.252 Test: test ... 
00:05:10.252 register 0x200000200000 2097152 00:05:10.252 malloc 3145728 00:05:10.252 register 0x200000400000 4194304 00:05:10.252 buf 0x200000500000 len 3145728 PASSED 00:05:10.252 malloc 64 00:05:10.252 buf 0x2000004fff40 len 64 PASSED 00:05:10.252 malloc 4194304 00:05:10.252 register 0x200000800000 6291456 00:05:10.252 buf 0x200000a00000 len 4194304 PASSED 00:05:10.252 free 0x200000500000 3145728 00:05:10.252 free 0x2000004fff40 64 00:05:10.252 unregister 0x200000400000 4194304 PASSED 00:05:10.252 free 0x200000a00000 4194304 00:05:10.252 unregister 0x200000800000 6291456 PASSED 00:05:10.252 malloc 8388608 00:05:10.252 register 0x200000400000 10485760 00:05:10.252 buf 0x200000600000 len 8388608 PASSED 00:05:10.252 free 0x200000600000 8388608 00:05:10.252 unregister 0x200000400000 10485760 PASSED 00:05:10.252 passed 00:05:10.252 00:05:10.252 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.252 suites 1 1 n/a 0 0 00:05:10.252 tests 1 1 1 0 0 00:05:10.252 asserts 15 15 15 0 n/a 00:05:10.252 00:05:10.252 Elapsed time = 0.008 seconds 00:05:10.252 00:05:10.252 real 0m0.065s 00:05:10.252 user 0m0.026s 00:05:10.252 sys 0m0.038s 00:05:10.252 20:41:13 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.252 20:41:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:10.252 ************************************ 00:05:10.252 END TEST env_mem_callbacks 00:05:10.252 ************************************ 00:05:10.252 20:41:13 env -- common/autotest_common.sh@1142 -- # return 0 00:05:10.252 00:05:10.252 real 0m7.342s 00:05:10.252 user 0m1.021s 00:05:10.252 sys 0m0.862s 00:05:10.252 20:41:13 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.252 20:41:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.252 ************************************ 00:05:10.252 END TEST env 00:05:10.252 ************************************ 00:05:10.252 20:41:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:10.252 20:41:13 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:10.252 20:41:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.252 20:41:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.252 20:41:13 -- common/autotest_common.sh@10 -- # set +x 00:05:10.252 ************************************ 00:05:10.252 START TEST rpc 00:05:10.252 ************************************ 00:05:10.252 20:41:13 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:10.252 * Looking for test storage... 00:05:10.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:10.252 20:41:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1363466 00:05:10.252 20:41:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.252 20:41:14 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:10.252 20:41:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1363466 00:05:10.252 20:41:14 rpc -- common/autotest_common.sh@829 -- # '[' -z 1363466 ']' 00:05:10.252 20:41:14 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.252 20:41:14 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.252 20:41:14 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:10.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.252 20:41:14 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.252 20:41:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.252 [2024-07-15 20:41:14.100079] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:05:10.252 [2024-07-15 20:41:14.100137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1363466 ] 00:05:10.252 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.512 [2024-07-15 20:41:14.160629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.512 [2024-07-15 20:41:14.226523] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:10.512 [2024-07-15 20:41:14.226559] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1363466' to capture a snapshot of events at runtime. 00:05:10.512 [2024-07-15 20:41:14.226567] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:10.512 [2024-07-15 20:41:14.226573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:10.512 [2024-07-15 20:41:14.226579] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1363466 for offline analysis/debug. 00:05:10.512 [2024-07-15 20:41:14.226600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.083 20:41:14 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.083 20:41:14 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:11.083 20:41:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.083 20:41:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.083 20:41:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:11.083 20:41:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:11.083 20:41:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.083 20:41:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.083 20:41:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.083 ************************************ 00:05:11.083 START TEST rpc_integrity 00:05:11.083 ************************************ 00:05:11.083 20:41:14 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:11.083 20:41:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:11.083 20:41:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.083 20:41:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.083 20:41:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.083 20:41:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:11.083 20:41:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:11.083 20:41:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:11.083 20:41:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:11.083 20:41:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.083 20:41:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.083 20:41:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.083 20:41:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:11.083 20:41:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:11.083 20:41:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.083 20:41:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.344 20:41:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.344 20:41:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:11.344 { 00:05:11.344 "name": "Malloc0", 00:05:11.344 "aliases": [ 00:05:11.344 "c79cc957-3b3d-47a3-b830-cf977e320786" 00:05:11.344 ], 00:05:11.344 "product_name": "Malloc disk", 00:05:11.344 "block_size": 512, 00:05:11.344 "num_blocks": 16384, 00:05:11.344 "uuid": "c79cc957-3b3d-47a3-b830-cf977e320786", 00:05:11.344 "assigned_rate_limits": { 00:05:11.344 "rw_ios_per_sec": 0, 00:05:11.344 "rw_mbytes_per_sec": 0, 00:05:11.344 "r_mbytes_per_sec": 0, 00:05:11.344 "w_mbytes_per_sec": 0 00:05:11.344 }, 00:05:11.344 "claimed": false, 00:05:11.344 "zoned": false, 00:05:11.344 "supported_io_types": { 00:05:11.344 "read": true, 00:05:11.344 "write": true, 00:05:11.344 "unmap": true, 00:05:11.344 "flush": true, 00:05:11.344 "reset": true, 00:05:11.344 "nvme_admin": false, 00:05:11.344 "nvme_io": false, 00:05:11.344 "nvme_io_md": false, 00:05:11.344 "write_zeroes": true, 00:05:11.344 "zcopy": true, 00:05:11.344 "get_zone_info": false, 00:05:11.344 "zone_management": false, 00:05:11.344 "zone_append": false, 00:05:11.344 "compare": false, 00:05:11.344 "compare_and_write": false, 00:05:11.344 "abort": true, 00:05:11.344 "seek_hole": false, 00:05:11.344 "seek_data": false, 00:05:11.344 "copy": true, 00:05:11.344 "nvme_iov_md": false 00:05:11.344 }, 00:05:11.344 "memory_domains": [ 00:05:11.344 { 00:05:11.344 "dma_device_id": "system", 00:05:11.344 "dma_device_type": 1 00:05:11.344 }, 00:05:11.344 { 00:05:11.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.344 "dma_device_type": 2 00:05:11.344 } 00:05:11.344 ], 00:05:11.344 "driver_specific": {} 00:05:11.344 } 00:05:11.344 ]' 00:05:11.344 20:41:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:11.344 20:41:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:11.344 20:41:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.344 [2024-07-15 20:41:15.038323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:11.344 [2024-07-15 20:41:15.038357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.344 [2024-07-15 20:41:15.038370] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf11d80 00:05:11.344 [2024-07-15 20:41:15.038377] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:11.344 
[2024-07-15 20:41:15.039720] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.344 [2024-07-15 20:41:15.039741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:11.344 Passthru0 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.344 20:41:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.344 20:41:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:11.344 { 00:05:11.344 "name": "Malloc0", 00:05:11.344 "aliases": [ 00:05:11.344 "c79cc957-3b3d-47a3-b830-cf977e320786" 00:05:11.344 ], 00:05:11.344 "product_name": "Malloc disk", 00:05:11.344 "block_size": 512, 00:05:11.344 "num_blocks": 16384, 00:05:11.344 "uuid": "c79cc957-3b3d-47a3-b830-cf977e320786", 00:05:11.344 "assigned_rate_limits": { 00:05:11.344 "rw_ios_per_sec": 0, 00:05:11.344 "rw_mbytes_per_sec": 0, 00:05:11.344 "r_mbytes_per_sec": 0, 00:05:11.344 "w_mbytes_per_sec": 0 00:05:11.344 }, 00:05:11.344 "claimed": true, 00:05:11.344 "claim_type": "exclusive_write", 00:05:11.344 "zoned": false, 00:05:11.344 "supported_io_types": { 00:05:11.344 "read": true, 00:05:11.344 "write": true, 00:05:11.344 "unmap": true, 00:05:11.344 "flush": true, 00:05:11.344 "reset": true, 00:05:11.344 "nvme_admin": false, 00:05:11.344 "nvme_io": false, 00:05:11.344 "nvme_io_md": false, 00:05:11.344 "write_zeroes": true, 00:05:11.344 "zcopy": true, 00:05:11.344 "get_zone_info": false, 00:05:11.344 "zone_management": false, 00:05:11.344 "zone_append": false, 00:05:11.344 "compare": false, 00:05:11.344 "compare_and_write": false, 00:05:11.344 "abort": true, 00:05:11.344 "seek_hole": false, 00:05:11.344 "seek_data": false, 00:05:11.344 "copy": true, 00:05:11.344 "nvme_iov_md": false 00:05:11.344 }, 00:05:11.344 "memory_domains": [ 00:05:11.344 { 00:05:11.344 "dma_device_id": "system", 00:05:11.344 "dma_device_type": 1 00:05:11.344 }, 00:05:11.344 { 00:05:11.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.344 "dma_device_type": 2 00:05:11.344 } 00:05:11.344 ], 00:05:11.344 "driver_specific": {} 00:05:11.344 }, 00:05:11.344 { 00:05:11.344 "name": "Passthru0", 00:05:11.344 "aliases": [ 00:05:11.344 "4c2e1852-4ee6-55dd-9473-3ce0195c6826" 00:05:11.344 ], 00:05:11.344 "product_name": "passthru", 00:05:11.344 "block_size": 512, 00:05:11.344 "num_blocks": 16384, 00:05:11.344 "uuid": "4c2e1852-4ee6-55dd-9473-3ce0195c6826", 00:05:11.344 "assigned_rate_limits": { 00:05:11.344 "rw_ios_per_sec": 0, 00:05:11.344 "rw_mbytes_per_sec": 0, 00:05:11.344 "r_mbytes_per_sec": 0, 00:05:11.344 "w_mbytes_per_sec": 0 00:05:11.344 }, 00:05:11.344 "claimed": false, 00:05:11.344 "zoned": false, 00:05:11.344 "supported_io_types": { 00:05:11.344 "read": true, 00:05:11.344 "write": true, 00:05:11.344 "unmap": true, 00:05:11.344 "flush": true, 00:05:11.344 "reset": true, 00:05:11.344 "nvme_admin": false, 00:05:11.344 "nvme_io": false, 00:05:11.344 "nvme_io_md": false, 00:05:11.344 "write_zeroes": true, 00:05:11.344 "zcopy": true, 00:05:11.344 "get_zone_info": false, 00:05:11.344 "zone_management": false, 00:05:11.344 "zone_append": false, 00:05:11.344 "compare": false, 00:05:11.344 "compare_and_write": false, 00:05:11.344 "abort": true, 00:05:11.344 "seek_hole": false, 
00:05:11.344 "seek_data": false, 00:05:11.344 "copy": true, 00:05:11.344 "nvme_iov_md": false 00:05:11.344 }, 00:05:11.344 "memory_domains": [ 00:05:11.344 { 00:05:11.344 "dma_device_id": "system", 00:05:11.344 "dma_device_type": 1 00:05:11.344 }, 00:05:11.344 { 00:05:11.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.344 "dma_device_type": 2 00:05:11.344 } 00:05:11.344 ], 00:05:11.344 "driver_specific": { 00:05:11.344 "passthru": { 00:05:11.344 "name": "Passthru0", 00:05:11.344 "base_bdev_name": "Malloc0" 00:05:11.344 } 00:05:11.344 } 00:05:11.344 } 00:05:11.344 ]' 00:05:11.344 20:41:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:11.344 20:41:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:11.344 20:41:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.344 20:41:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.344 20:41:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.344 20:41:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:11.344 20:41:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:11.344 20:41:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:11.344 00:05:11.344 real 0m0.302s 00:05:11.344 user 0m0.196s 00:05:11.344 sys 0m0.037s 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.344 20:41:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.344 ************************************ 00:05:11.344 END TEST rpc_integrity 00:05:11.344 ************************************ 00:05:11.344 20:41:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:11.344 20:41:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:11.344 20:41:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.344 20:41:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.344 20:41:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.605 ************************************ 00:05:11.605 START TEST rpc_plugins 00:05:11.605 ************************************ 00:05:11.605 20:41:15 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:11.605 20:41:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:11.605 20:41:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.605 20:41:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.605 20:41:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.605 20:41:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:11.605 20:41:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:11.605 20:41:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.605 20:41:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.605 20:41:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.605 20:41:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:11.605 { 00:05:11.605 "name": "Malloc1", 00:05:11.605 "aliases": [ 00:05:11.605 "8dc34290-9698-4746-994c-000024ce9998" 00:05:11.605 ], 00:05:11.605 "product_name": "Malloc disk", 00:05:11.605 "block_size": 4096, 00:05:11.605 "num_blocks": 256, 00:05:11.605 "uuid": "8dc34290-9698-4746-994c-000024ce9998", 00:05:11.605 "assigned_rate_limits": { 00:05:11.605 "rw_ios_per_sec": 0, 00:05:11.605 "rw_mbytes_per_sec": 0, 00:05:11.605 "r_mbytes_per_sec": 0, 00:05:11.605 "w_mbytes_per_sec": 0 00:05:11.605 }, 00:05:11.605 "claimed": false, 00:05:11.605 "zoned": false, 00:05:11.605 "supported_io_types": { 00:05:11.605 "read": true, 00:05:11.605 "write": true, 00:05:11.605 "unmap": true, 00:05:11.605 "flush": true, 00:05:11.605 "reset": true, 00:05:11.605 "nvme_admin": false, 00:05:11.605 "nvme_io": false, 00:05:11.605 "nvme_io_md": false, 00:05:11.605 "write_zeroes": true, 00:05:11.606 "zcopy": true, 00:05:11.606 "get_zone_info": false, 00:05:11.606 "zone_management": false, 00:05:11.606 "zone_append": false, 00:05:11.606 "compare": false, 00:05:11.606 "compare_and_write": false, 00:05:11.606 "abort": true, 00:05:11.606 "seek_hole": false, 00:05:11.606 "seek_data": false, 00:05:11.606 "copy": true, 00:05:11.606 "nvme_iov_md": false 00:05:11.606 }, 00:05:11.606 "memory_domains": [ 00:05:11.606 { 00:05:11.606 "dma_device_id": "system", 00:05:11.606 "dma_device_type": 1 00:05:11.606 }, 00:05:11.606 { 00:05:11.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.606 "dma_device_type": 2 00:05:11.606 } 00:05:11.606 ], 00:05:11.606 "driver_specific": {} 00:05:11.606 } 00:05:11.606 ]' 00:05:11.606 20:41:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:11.606 20:41:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:11.606 20:41:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:11.606 20:41:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.606 20:41:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.606 20:41:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.606 20:41:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:11.606 20:41:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.606 20:41:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.606 20:41:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.606 20:41:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:11.606 20:41:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:11.606 20:41:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:11.606 00:05:11.606 real 0m0.152s 00:05:11.606 user 0m0.097s 00:05:11.606 sys 0m0.020s 00:05:11.606 20:41:15 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.606 20:41:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.606 ************************************ 00:05:11.606 END TEST rpc_plugins 00:05:11.606 ************************************ 00:05:11.606 20:41:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:11.606 20:41:15 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:11.606 20:41:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.606 20:41:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.606 20:41:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.606 ************************************ 00:05:11.606 START TEST rpc_trace_cmd_test 00:05:11.606 ************************************ 00:05:11.866 20:41:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:11.866 20:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:11.866 20:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:11.866 20:41:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.866 20:41:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.867 20:41:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.867 20:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:11.867 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1363466", 00:05:11.867 "tpoint_group_mask": "0x8", 00:05:11.867 "iscsi_conn": { 00:05:11.867 "mask": "0x2", 00:05:11.867 "tpoint_mask": "0x0" 00:05:11.867 }, 00:05:11.867 "scsi": { 00:05:11.867 "mask": "0x4", 00:05:11.867 "tpoint_mask": "0x0" 00:05:11.867 }, 00:05:11.867 "bdev": { 00:05:11.867 "mask": "0x8", 00:05:11.867 "tpoint_mask": "0xffffffffffffffff" 00:05:11.867 }, 00:05:11.867 "nvmf_rdma": { 00:05:11.867 "mask": "0x10", 00:05:11.867 "tpoint_mask": "0x0" 00:05:11.867 }, 00:05:11.867 "nvmf_tcp": { 00:05:11.867 "mask": "0x20", 00:05:11.867 "tpoint_mask": "0x0" 00:05:11.867 }, 00:05:11.867 "ftl": { 00:05:11.867 "mask": "0x40", 00:05:11.867 "tpoint_mask": "0x0" 00:05:11.867 }, 00:05:11.867 "blobfs": { 00:05:11.867 "mask": "0x80", 00:05:11.867 "tpoint_mask": "0x0" 00:05:11.867 }, 00:05:11.867 "dsa": { 00:05:11.867 "mask": "0x200", 00:05:11.867 "tpoint_mask": "0x0" 00:05:11.867 }, 00:05:11.867 "thread": { 00:05:11.867 "mask": "0x400", 00:05:11.867 "tpoint_mask": "0x0" 00:05:11.867 }, 00:05:11.867 "nvme_pcie": { 00:05:11.867 "mask": "0x800", 00:05:11.867 "tpoint_mask": "0x0" 00:05:11.867 }, 00:05:11.867 "iaa": { 00:05:11.867 "mask": "0x1000", 00:05:11.867 "tpoint_mask": "0x0" 00:05:11.867 }, 00:05:11.867 "nvme_tcp": { 00:05:11.867 "mask": "0x2000", 00:05:11.867 "tpoint_mask": "0x0" 00:05:11.867 }, 00:05:11.867 "bdev_nvme": { 00:05:11.867 "mask": "0x4000", 00:05:11.867 "tpoint_mask": "0x0" 00:05:11.867 }, 00:05:11.867 "sock": { 00:05:11.867 "mask": "0x8000", 00:05:11.867 "tpoint_mask": "0x0" 00:05:11.867 } 00:05:11.867 }' 00:05:11.867 20:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:11.867 20:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:11.867 20:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:11.867 20:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:11.867 20:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:11.867 20:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:11.867 20:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:11.867 20:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:11.867 20:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:11.867 20:41:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
00:05:11.867 00:05:11.867 real 0m0.246s 00:05:11.867 user 0m0.209s 00:05:11.867 sys 0m0.030s 00:05:11.867 20:41:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.867 20:41:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.867 ************************************ 00:05:11.867 END TEST rpc_trace_cmd_test 00:05:11.867 ************************************ 00:05:12.128 20:41:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:12.128 20:41:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:12.128 20:41:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:12.128 20:41:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:12.128 20:41:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.128 20:41:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.128 20:41:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.128 ************************************ 00:05:12.128 START TEST rpc_daemon_integrity 00:05:12.128 ************************************ 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:12.128 { 00:05:12.128 "name": "Malloc2", 00:05:12.128 "aliases": [ 00:05:12.128 "e038ff41-416b-470e-8808-1daa8460085a" 00:05:12.128 ], 00:05:12.128 "product_name": "Malloc disk", 00:05:12.128 "block_size": 512, 00:05:12.128 "num_blocks": 16384, 00:05:12.128 "uuid": "e038ff41-416b-470e-8808-1daa8460085a", 00:05:12.128 "assigned_rate_limits": { 00:05:12.128 "rw_ios_per_sec": 0, 00:05:12.128 "rw_mbytes_per_sec": 0, 00:05:12.128 "r_mbytes_per_sec": 0, 00:05:12.128 "w_mbytes_per_sec": 0 00:05:12.128 }, 00:05:12.128 "claimed": false, 00:05:12.128 "zoned": false, 00:05:12.128 "supported_io_types": { 00:05:12.128 "read": true, 00:05:12.128 "write": true, 00:05:12.128 "unmap": true, 00:05:12.128 "flush": true, 00:05:12.128 "reset": true, 00:05:12.128 "nvme_admin": false, 00:05:12.128 "nvme_io": false, 
00:05:12.128 "nvme_io_md": false, 00:05:12.128 "write_zeroes": true, 00:05:12.128 "zcopy": true, 00:05:12.128 "get_zone_info": false, 00:05:12.128 "zone_management": false, 00:05:12.128 "zone_append": false, 00:05:12.128 "compare": false, 00:05:12.128 "compare_and_write": false, 00:05:12.128 "abort": true, 00:05:12.128 "seek_hole": false, 00:05:12.128 "seek_data": false, 00:05:12.128 "copy": true, 00:05:12.128 "nvme_iov_md": false 00:05:12.128 }, 00:05:12.128 "memory_domains": [ 00:05:12.128 { 00:05:12.128 "dma_device_id": "system", 00:05:12.128 "dma_device_type": 1 00:05:12.128 }, 00:05:12.128 { 00:05:12.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.128 "dma_device_type": 2 00:05:12.128 } 00:05:12.128 ], 00:05:12.128 "driver_specific": {} 00:05:12.128 } 00:05:12.128 ]' 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.128 [2024-07-15 20:41:15.956807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:12.128 [2024-07-15 20:41:15.956835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.128 [2024-07-15 20:41:15.956846] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf12a90 00:05:12.128 [2024-07-15 20:41:15.956852] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.128 [2024-07-15 20:41:15.958056] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:12.128 [2024-07-15 20:41:15.958078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:12.128 Passthru0 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:12.128 { 00:05:12.128 "name": "Malloc2", 00:05:12.128 "aliases": [ 00:05:12.128 "e038ff41-416b-470e-8808-1daa8460085a" 00:05:12.128 ], 00:05:12.128 "product_name": "Malloc disk", 00:05:12.128 "block_size": 512, 00:05:12.128 "num_blocks": 16384, 00:05:12.128 "uuid": "e038ff41-416b-470e-8808-1daa8460085a", 00:05:12.128 "assigned_rate_limits": { 00:05:12.128 "rw_ios_per_sec": 0, 00:05:12.128 "rw_mbytes_per_sec": 0, 00:05:12.128 "r_mbytes_per_sec": 0, 00:05:12.128 "w_mbytes_per_sec": 0 00:05:12.128 }, 00:05:12.128 "claimed": true, 00:05:12.128 "claim_type": "exclusive_write", 00:05:12.128 "zoned": false, 00:05:12.128 "supported_io_types": { 00:05:12.128 "read": true, 00:05:12.128 "write": true, 00:05:12.128 "unmap": true, 00:05:12.128 "flush": true, 00:05:12.128 "reset": true, 00:05:12.128 "nvme_admin": false, 00:05:12.128 "nvme_io": false, 00:05:12.128 "nvme_io_md": false, 00:05:12.128 "write_zeroes": true, 00:05:12.128 "zcopy": true, 00:05:12.128 "get_zone_info": 
false, 00:05:12.128 "zone_management": false, 00:05:12.128 "zone_append": false, 00:05:12.128 "compare": false, 00:05:12.128 "compare_and_write": false, 00:05:12.128 "abort": true, 00:05:12.128 "seek_hole": false, 00:05:12.128 "seek_data": false, 00:05:12.128 "copy": true, 00:05:12.128 "nvme_iov_md": false 00:05:12.128 }, 00:05:12.128 "memory_domains": [ 00:05:12.128 { 00:05:12.128 "dma_device_id": "system", 00:05:12.128 "dma_device_type": 1 00:05:12.128 }, 00:05:12.128 { 00:05:12.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.128 "dma_device_type": 2 00:05:12.128 } 00:05:12.128 ], 00:05:12.128 "driver_specific": {} 00:05:12.128 }, 00:05:12.128 { 00:05:12.128 "name": "Passthru0", 00:05:12.128 "aliases": [ 00:05:12.128 "3c0baafb-f4b9-56e8-9e81-9253c8a0c771" 00:05:12.128 ], 00:05:12.128 "product_name": "passthru", 00:05:12.128 "block_size": 512, 00:05:12.128 "num_blocks": 16384, 00:05:12.128 "uuid": "3c0baafb-f4b9-56e8-9e81-9253c8a0c771", 00:05:12.128 "assigned_rate_limits": { 00:05:12.128 "rw_ios_per_sec": 0, 00:05:12.128 "rw_mbytes_per_sec": 0, 00:05:12.128 "r_mbytes_per_sec": 0, 00:05:12.128 "w_mbytes_per_sec": 0 00:05:12.128 }, 00:05:12.128 "claimed": false, 00:05:12.128 "zoned": false, 00:05:12.128 "supported_io_types": { 00:05:12.128 "read": true, 00:05:12.128 "write": true, 00:05:12.128 "unmap": true, 00:05:12.128 "flush": true, 00:05:12.128 "reset": true, 00:05:12.128 "nvme_admin": false, 00:05:12.128 "nvme_io": false, 00:05:12.128 "nvme_io_md": false, 00:05:12.128 "write_zeroes": true, 00:05:12.128 "zcopy": true, 00:05:12.128 "get_zone_info": false, 00:05:12.128 "zone_management": false, 00:05:12.128 "zone_append": false, 00:05:12.128 "compare": false, 00:05:12.128 "compare_and_write": false, 00:05:12.128 "abort": true, 00:05:12.128 "seek_hole": false, 00:05:12.128 "seek_data": false, 00:05:12.128 "copy": true, 00:05:12.128 "nvme_iov_md": false 00:05:12.128 }, 00:05:12.128 "memory_domains": [ 00:05:12.128 { 00:05:12.128 "dma_device_id": "system", 00:05:12.128 "dma_device_type": 1 00:05:12.128 }, 00:05:12.128 { 00:05:12.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.128 "dma_device_type": 2 00:05:12.128 } 00:05:12.128 ], 00:05:12.128 "driver_specific": { 00:05:12.128 "passthru": { 00:05:12.128 "name": "Passthru0", 00:05:12.128 "base_bdev_name": "Malloc2" 00:05:12.128 } 00:05:12.128 } 00:05:12.128 } 00:05:12.128 ]' 00:05:12.128 20:41:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:12.389 20:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.390 20:41:16 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:12.390 00:05:12.390 real 0m0.295s 00:05:12.390 user 0m0.186s 00:05:12.390 sys 0m0.043s 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.390 20:41:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.390 ************************************ 00:05:12.390 END TEST rpc_daemon_integrity 00:05:12.390 ************************************ 00:05:12.390 20:41:16 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:12.390 20:41:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:12.390 20:41:16 rpc -- rpc/rpc.sh@84 -- # killprocess 1363466 00:05:12.390 20:41:16 rpc -- common/autotest_common.sh@948 -- # '[' -z 1363466 ']' 00:05:12.390 20:41:16 rpc -- common/autotest_common.sh@952 -- # kill -0 1363466 00:05:12.390 20:41:16 rpc -- common/autotest_common.sh@953 -- # uname 00:05:12.390 20:41:16 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.390 20:41:16 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1363466 00:05:12.390 20:41:16 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.390 20:41:16 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.390 20:41:16 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1363466' 00:05:12.390 killing process with pid 1363466 00:05:12.390 20:41:16 rpc -- common/autotest_common.sh@967 -- # kill 1363466 00:05:12.390 20:41:16 rpc -- common/autotest_common.sh@972 -- # wait 1363466 00:05:12.651 00:05:12.651 real 0m2.469s 00:05:12.651 user 0m3.278s 00:05:12.651 sys 0m0.672s 00:05:12.651 20:41:16 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.651 20:41:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.651 ************************************ 00:05:12.651 END TEST rpc 00:05:12.651 ************************************ 00:05:12.651 20:41:16 -- common/autotest_common.sh@1142 -- # return 0 00:05:12.651 20:41:16 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:12.651 20:41:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.651 20:41:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.651 20:41:16 -- common/autotest_common.sh@10 -- # set +x 00:05:12.651 ************************************ 00:05:12.651 START TEST skip_rpc 00:05:12.651 ************************************ 00:05:12.651 20:41:16 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:12.911 * Looking for test storage... 
00:05:12.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.912 20:41:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:12.912 20:41:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:12.912 20:41:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:12.912 20:41:16 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.912 20:41:16 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.912 20:41:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.912 ************************************ 00:05:12.912 START TEST skip_rpc 00:05:12.912 ************************************ 00:05:12.912 20:41:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:12.912 20:41:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1364042 00:05:12.912 20:41:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.912 20:41:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:12.912 20:41:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:12.912 [2024-07-15 20:41:16.687745] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:05:12.912 [2024-07-15 20:41:16.687810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1364042 ] 00:05:12.912 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.912 [2024-07-15 20:41:16.751161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.172 [2024-07-15 20:41:16.825874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1364042 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1364042 ']' 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1364042 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1364042 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1364042' 00:05:18.462 killing process with pid 1364042 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1364042 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1364042 00:05:18.462 00:05:18.462 real 0m5.277s 00:05:18.462 user 0m5.064s 00:05:18.462 sys 0m0.245s 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.462 20:41:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.462 ************************************ 00:05:18.462 END TEST skip_rpc 00:05:18.462 ************************************ 00:05:18.462 20:41:21 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:18.462 20:41:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:18.462 20:41:21 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.462 20:41:21 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.462 20:41:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.462 ************************************ 00:05:18.462 START TEST skip_rpc_with_json 00:05:18.462 ************************************ 00:05:18.462 20:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:18.462 20:41:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:18.462 20:41:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1365265 00:05:18.462 20:41:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.462 20:41:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1365265 00:05:18.462 20:41:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.462 20:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1365265 ']' 00:05:18.462 20:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.462 20:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.462 20:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:18.462 20:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.462 20:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.462 [2024-07-15 20:41:22.039666] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:05:18.463 [2024-07-15 20:41:22.039721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1365265 ] 00:05:18.463 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.463 [2024-07-15 20:41:22.101482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.463 [2024-07-15 20:41:22.170886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.033 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.033 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:19.034 20:41:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:19.034 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.034 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.034 [2024-07-15 20:41:22.802599] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:19.034 request: 00:05:19.034 { 00:05:19.034 "trtype": "tcp", 00:05:19.034 "method": "nvmf_get_transports", 00:05:19.034 "req_id": 1 00:05:19.034 } 00:05:19.034 Got JSON-RPC error response 00:05:19.034 response: 00:05:19.034 { 00:05:19.034 "code": -19, 00:05:19.034 "message": "No such device" 00:05:19.034 } 00:05:19.034 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:19.034 20:41:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:19.034 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.034 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.034 [2024-07-15 20:41:22.814724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.034 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.034 20:41:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:19.034 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.034 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.294 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.295 20:41:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:19.295 { 00:05:19.295 "subsystems": [ 00:05:19.295 { 00:05:19.295 "subsystem": "vfio_user_target", 00:05:19.295 "config": null 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "subsystem": "keyring", 00:05:19.295 "config": [] 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "subsystem": "iobuf", 00:05:19.295 "config": [ 00:05:19.295 { 00:05:19.295 "method": "iobuf_set_options", 00:05:19.295 "params": { 00:05:19.295 "small_pool_count": 8192, 00:05:19.295 "large_pool_count": 1024, 00:05:19.295 "small_bufsize": 8192, 00:05:19.295 "large_bufsize": 
135168 00:05:19.295 } 00:05:19.295 } 00:05:19.295 ] 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "subsystem": "sock", 00:05:19.295 "config": [ 00:05:19.295 { 00:05:19.295 "method": "sock_set_default_impl", 00:05:19.295 "params": { 00:05:19.295 "impl_name": "posix" 00:05:19.295 } 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "method": "sock_impl_set_options", 00:05:19.295 "params": { 00:05:19.295 "impl_name": "ssl", 00:05:19.295 "recv_buf_size": 4096, 00:05:19.295 "send_buf_size": 4096, 00:05:19.295 "enable_recv_pipe": true, 00:05:19.295 "enable_quickack": false, 00:05:19.295 "enable_placement_id": 0, 00:05:19.295 "enable_zerocopy_send_server": true, 00:05:19.295 "enable_zerocopy_send_client": false, 00:05:19.295 "zerocopy_threshold": 0, 00:05:19.295 "tls_version": 0, 00:05:19.295 "enable_ktls": false 00:05:19.295 } 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "method": "sock_impl_set_options", 00:05:19.295 "params": { 00:05:19.295 "impl_name": "posix", 00:05:19.295 "recv_buf_size": 2097152, 00:05:19.295 "send_buf_size": 2097152, 00:05:19.295 "enable_recv_pipe": true, 00:05:19.295 "enable_quickack": false, 00:05:19.295 "enable_placement_id": 0, 00:05:19.295 "enable_zerocopy_send_server": true, 00:05:19.295 "enable_zerocopy_send_client": false, 00:05:19.295 "zerocopy_threshold": 0, 00:05:19.295 "tls_version": 0, 00:05:19.295 "enable_ktls": false 00:05:19.295 } 00:05:19.295 } 00:05:19.295 ] 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "subsystem": "vmd", 00:05:19.295 "config": [] 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "subsystem": "accel", 00:05:19.295 "config": [ 00:05:19.295 { 00:05:19.295 "method": "accel_set_options", 00:05:19.295 "params": { 00:05:19.295 "small_cache_size": 128, 00:05:19.295 "large_cache_size": 16, 00:05:19.295 "task_count": 2048, 00:05:19.295 "sequence_count": 2048, 00:05:19.295 "buf_count": 2048 00:05:19.295 } 00:05:19.295 } 00:05:19.295 ] 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "subsystem": "bdev", 00:05:19.295 "config": [ 00:05:19.295 { 00:05:19.295 "method": "bdev_set_options", 00:05:19.295 "params": { 00:05:19.295 "bdev_io_pool_size": 65535, 00:05:19.295 "bdev_io_cache_size": 256, 00:05:19.295 "bdev_auto_examine": true, 00:05:19.295 "iobuf_small_cache_size": 128, 00:05:19.295 "iobuf_large_cache_size": 16 00:05:19.295 } 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "method": "bdev_raid_set_options", 00:05:19.295 "params": { 00:05:19.295 "process_window_size_kb": 1024 00:05:19.295 } 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "method": "bdev_iscsi_set_options", 00:05:19.295 "params": { 00:05:19.295 "timeout_sec": 30 00:05:19.295 } 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "method": "bdev_nvme_set_options", 00:05:19.295 "params": { 00:05:19.295 "action_on_timeout": "none", 00:05:19.295 "timeout_us": 0, 00:05:19.295 "timeout_admin_us": 0, 00:05:19.295 "keep_alive_timeout_ms": 10000, 00:05:19.295 "arbitration_burst": 0, 00:05:19.295 "low_priority_weight": 0, 00:05:19.295 "medium_priority_weight": 0, 00:05:19.295 "high_priority_weight": 0, 00:05:19.295 "nvme_adminq_poll_period_us": 10000, 00:05:19.295 "nvme_ioq_poll_period_us": 0, 00:05:19.295 "io_queue_requests": 0, 00:05:19.295 "delay_cmd_submit": true, 00:05:19.295 "transport_retry_count": 4, 00:05:19.295 "bdev_retry_count": 3, 00:05:19.295 "transport_ack_timeout": 0, 00:05:19.295 "ctrlr_loss_timeout_sec": 0, 00:05:19.295 "reconnect_delay_sec": 0, 00:05:19.295 "fast_io_fail_timeout_sec": 0, 00:05:19.295 "disable_auto_failback": false, 00:05:19.295 "generate_uuids": false, 00:05:19.295 "transport_tos": 0, 
00:05:19.295 "nvme_error_stat": false, 00:05:19.295 "rdma_srq_size": 0, 00:05:19.295 "io_path_stat": false, 00:05:19.295 "allow_accel_sequence": false, 00:05:19.295 "rdma_max_cq_size": 0, 00:05:19.295 "rdma_cm_event_timeout_ms": 0, 00:05:19.295 "dhchap_digests": [ 00:05:19.295 "sha256", 00:05:19.295 "sha384", 00:05:19.295 "sha512" 00:05:19.295 ], 00:05:19.295 "dhchap_dhgroups": [ 00:05:19.295 "null", 00:05:19.295 "ffdhe2048", 00:05:19.295 "ffdhe3072", 00:05:19.295 "ffdhe4096", 00:05:19.295 "ffdhe6144", 00:05:19.295 "ffdhe8192" 00:05:19.295 ] 00:05:19.295 } 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "method": "bdev_nvme_set_hotplug", 00:05:19.295 "params": { 00:05:19.295 "period_us": 100000, 00:05:19.295 "enable": false 00:05:19.295 } 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "method": "bdev_wait_for_examine" 00:05:19.295 } 00:05:19.295 ] 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "subsystem": "scsi", 00:05:19.295 "config": null 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "subsystem": "scheduler", 00:05:19.295 "config": [ 00:05:19.295 { 00:05:19.295 "method": "framework_set_scheduler", 00:05:19.295 "params": { 00:05:19.295 "name": "static" 00:05:19.295 } 00:05:19.295 } 00:05:19.295 ] 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "subsystem": "vhost_scsi", 00:05:19.295 "config": [] 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "subsystem": "vhost_blk", 00:05:19.295 "config": [] 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "subsystem": "ublk", 00:05:19.295 "config": [] 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "subsystem": "nbd", 00:05:19.295 "config": [] 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "subsystem": "nvmf", 00:05:19.295 "config": [ 00:05:19.295 { 00:05:19.295 "method": "nvmf_set_config", 00:05:19.295 "params": { 00:05:19.295 "discovery_filter": "match_any", 00:05:19.295 "admin_cmd_passthru": { 00:05:19.295 "identify_ctrlr": false 00:05:19.295 } 00:05:19.295 } 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "method": "nvmf_set_max_subsystems", 00:05:19.295 "params": { 00:05:19.295 "max_subsystems": 1024 00:05:19.295 } 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "method": "nvmf_set_crdt", 00:05:19.295 "params": { 00:05:19.295 "crdt1": 0, 00:05:19.295 "crdt2": 0, 00:05:19.295 "crdt3": 0 00:05:19.295 } 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "method": "nvmf_create_transport", 00:05:19.295 "params": { 00:05:19.295 "trtype": "TCP", 00:05:19.295 "max_queue_depth": 128, 00:05:19.295 "max_io_qpairs_per_ctrlr": 127, 00:05:19.295 "in_capsule_data_size": 4096, 00:05:19.295 "max_io_size": 131072, 00:05:19.295 "io_unit_size": 131072, 00:05:19.295 "max_aq_depth": 128, 00:05:19.295 "num_shared_buffers": 511, 00:05:19.295 "buf_cache_size": 4294967295, 00:05:19.295 "dif_insert_or_strip": false, 00:05:19.295 "zcopy": false, 00:05:19.295 "c2h_success": true, 00:05:19.295 "sock_priority": 0, 00:05:19.295 "abort_timeout_sec": 1, 00:05:19.295 "ack_timeout": 0, 00:05:19.295 "data_wr_pool_size": 0 00:05:19.295 } 00:05:19.295 } 00:05:19.295 ] 00:05:19.295 }, 00:05:19.295 { 00:05:19.295 "subsystem": "iscsi", 00:05:19.295 "config": [ 00:05:19.295 { 00:05:19.295 "method": "iscsi_set_options", 00:05:19.295 "params": { 00:05:19.295 "node_base": "iqn.2016-06.io.spdk", 00:05:19.295 "max_sessions": 128, 00:05:19.295 "max_connections_per_session": 2, 00:05:19.295 "max_queue_depth": 64, 00:05:19.295 "default_time2wait": 2, 00:05:19.295 "default_time2retain": 20, 00:05:19.295 "first_burst_length": 8192, 00:05:19.295 "immediate_data": true, 00:05:19.295 "allow_duplicated_isid": false, 00:05:19.295 
"error_recovery_level": 0, 00:05:19.295 "nop_timeout": 60, 00:05:19.295 "nop_in_interval": 30, 00:05:19.295 "disable_chap": false, 00:05:19.295 "require_chap": false, 00:05:19.295 "mutual_chap": false, 00:05:19.295 "chap_group": 0, 00:05:19.295 "max_large_datain_per_connection": 64, 00:05:19.295 "max_r2t_per_connection": 4, 00:05:19.295 "pdu_pool_size": 36864, 00:05:19.295 "immediate_data_pool_size": 16384, 00:05:19.295 "data_out_pool_size": 2048 00:05:19.295 } 00:05:19.295 } 00:05:19.295 ] 00:05:19.295 } 00:05:19.295 ] 00:05:19.295 } 00:05:19.295 20:41:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:19.295 20:41:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1365265 00:05:19.295 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1365265 ']' 00:05:19.295 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1365265 00:05:19.295 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:19.295 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.295 20:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1365265 00:05:19.295 20:41:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.296 20:41:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.296 20:41:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1365265' 00:05:19.296 killing process with pid 1365265 00:05:19.296 20:41:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1365265 00:05:19.296 20:41:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1365265 00:05:19.555 20:41:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1365393 00:05:19.555 20:41:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:19.555 20:41:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1365393 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1365393 ']' 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1365393 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1365393 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1365393' 00:05:24.850 killing process with pid 1365393 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1365393 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1365393 
00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:24.850 00:05:24.850 real 0m6.539s 00:05:24.850 user 0m6.431s 00:05:24.850 sys 0m0.515s 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.850 ************************************ 00:05:24.850 END TEST skip_rpc_with_json 00:05:24.850 ************************************ 00:05:24.850 20:41:28 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:24.850 20:41:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:24.850 20:41:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.850 20:41:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.850 20:41:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.850 ************************************ 00:05:24.850 START TEST skip_rpc_with_delay 00:05:24.850 ************************************ 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.850 [2024-07-15 20:41:28.655941] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:24.850 [2024-07-15 20:41:28.656018] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.850 00:05:24.850 real 0m0.073s 00:05:24.850 user 0m0.051s 00:05:24.850 sys 0m0.022s 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.850 20:41:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:24.850 ************************************ 00:05:24.850 END TEST skip_rpc_with_delay 00:05:24.850 ************************************ 00:05:24.850 20:41:28 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:24.850 20:41:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:24.850 20:41:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:24.850 20:41:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:24.850 20:41:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.850 20:41:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.850 20:41:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.111 ************************************ 00:05:25.111 START TEST exit_on_failed_rpc_init 00:05:25.111 ************************************ 00:05:25.111 20:41:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:25.111 20:41:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1366705 00:05:25.111 20:41:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1366705 00:05:25.111 20:41:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.111 20:41:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1366705 ']' 00:05:25.111 20:41:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.111 20:41:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.111 20:41:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.111 20:41:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.111 20:41:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.111 [2024-07-15 20:41:28.816701] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:05:25.111 [2024-07-15 20:41:28.816764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1366705 ] 00:05:25.111 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.111 [2024-07-15 20:41:28.878771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.111 [2024-07-15 20:41:28.942729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.054 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.054 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:26.054 20:41:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.054 20:41:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:26.054 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:26.054 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:26.054 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.054 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.054 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.054 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.054 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.054 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.054 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.054 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:26.054 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:26.054 [2024-07-15 20:41:29.640187] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:05:26.054 [2024-07-15 20:41:29.640240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1366761 ] 00:05:26.054 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.055 [2024-07-15 20:41:29.716394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.055 [2024-07-15 20:41:29.780607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.055 [2024-07-15 20:41:29.780668] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:26.055 [2024-07-15 20:41:29.780678] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:26.055 [2024-07-15 20:41:29.780685] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1366705 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1366705 ']' 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1366705 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1366705 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1366705' 00:05:26.055 killing process with pid 1366705 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1366705 00:05:26.055 20:41:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1366705 00:05:26.354 00:05:26.354 real 0m1.361s 00:05:26.354 user 0m1.625s 00:05:26.354 sys 0m0.351s 00:05:26.354 20:41:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.354 20:41:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.354 ************************************ 00:05:26.354 END TEST exit_on_failed_rpc_init 00:05:26.354 ************************************ 00:05:26.354 20:41:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:26.354 20:41:30 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:26.354 00:05:26.354 real 0m13.664s 00:05:26.354 user 0m13.310s 00:05:26.354 sys 0m1.432s 00:05:26.354 20:41:30 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.354 20:41:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.354 ************************************ 00:05:26.354 END TEST skip_rpc 00:05:26.354 ************************************ 00:05:26.354 20:41:30 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.354 20:41:30 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:26.354 20:41:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.354 20:41:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.354 20:41:30 -- common/autotest_common.sh@10 -- # set +x 00:05:26.617 ************************************ 00:05:26.617 START TEST rpc_client 00:05:26.617 ************************************ 00:05:26.617 20:41:30 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:26.617 * Looking for test storage... 00:05:26.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:26.617 20:41:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:26.617 OK 00:05:26.617 20:41:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:26.617 00:05:26.617 real 0m0.130s 00:05:26.617 user 0m0.051s 00:05:26.617 sys 0m0.086s 00:05:26.617 20:41:30 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.617 20:41:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:26.617 ************************************ 00:05:26.617 END TEST rpc_client 00:05:26.617 ************************************ 00:05:26.617 20:41:30 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.617 20:41:30 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:26.617 20:41:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.617 20:41:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.617 20:41:30 -- common/autotest_common.sh@10 -- # set +x 00:05:26.617 ************************************ 00:05:26.617 START TEST json_config 00:05:26.617 ************************************ 00:05:26.617 20:41:30 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:26.879 20:41:30 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.879 
20:41:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.879 20:41:30 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.879 20:41:30 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.879 20:41:30 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.879 20:41:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.879 20:41:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.879 20:41:30 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.879 20:41:30 json_config -- paths/export.sh@5 -- # export PATH 00:05:26.879 20:41:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@47 -- # : 0 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.879 20:41:30 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:26.879 20:41:30 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:26.879 20:41:30 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:26.879 20:41:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:26.880 INFO: JSON configuration test init 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:26.880 20:41:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.880 20:41:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:26.880 20:41:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.880 20:41:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.880 20:41:30 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:26.880 20:41:30 json_config -- json_config/common.sh@9 -- # local app=target 00:05:26.880 20:41:30 json_config -- json_config/common.sh@10 -- # shift 00:05:26.880 20:41:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.880 20:41:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.880 20:41:30 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.880 20:41:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.880 20:41:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.880 20:41:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1367201 00:05:26.880 20:41:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:26.880 Waiting for target to run... 00:05:26.880 20:41:30 json_config -- json_config/common.sh@25 -- # waitforlisten 1367201 /var/tmp/spdk_tgt.sock 00:05:26.880 20:41:30 json_config -- common/autotest_common.sh@829 -- # '[' -z 1367201 ']' 00:05:26.880 20:41:30 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.880 20:41:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:26.880 20:41:30 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.880 20:41:30 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.880 20:41:30 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.880 20:41:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.880 [2024-07-15 20:41:30.619572] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:05:26.880 [2024-07-15 20:41:30.619638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1367201 ] 00:05:26.880 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.140 [2024-07-15 20:41:30.889510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.140 [2024-07-15 20:41:30.943820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.711 20:41:31 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.711 20:41:31 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:27.711 20:41:31 json_config -- json_config/common.sh@26 -- # echo '' 00:05:27.711 00:05:27.711 20:41:31 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:27.711 20:41:31 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:27.711 20:41:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.711 20:41:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.711 20:41:31 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:27.711 20:41:31 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:27.711 20:41:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:27.711 20:41:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.711 20:41:31 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:27.711 20:41:31 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:27.711 20:41:31 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:28.283 20:41:31 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:28.283 20:41:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:28.283 20:41:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.283 20:41:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.283 20:41:31 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:28.283 20:41:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:28.283 20:41:31 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:28.283 20:41:31 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:28.283 20:41:31 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:28.283 20:41:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:28.283 20:41:32 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:28.283 20:41:32 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:28.283 20:41:32 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:28.283 20:41:32 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:28.283 20:41:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.283 20:41:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.544 20:41:32 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:28.544 20:41:32 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:28.544 20:41:32 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:28.544 20:41:32 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:28.544 20:41:32 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:28.544 20:41:32 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:28.544 20:41:32 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:28.544 20:41:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.544 20:41:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.544 20:41:32 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:28.544 20:41:32 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:28.544 20:41:32 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:28.544 20:41:32 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.544 20:41:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.544 MallocForNvmf0 00:05:28.544 20:41:32 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:28.544 20:41:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:28.804 MallocForNvmf1 00:05:28.804 20:41:32 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:28.804 20:41:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:28.804 [2024-07-15 20:41:32.619282] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:28.804 20:41:32 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:28.804 20:41:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.065 20:41:32 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.065 20:41:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.065 20:41:32 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.065 20:41:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.325 20:41:33 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:29.325 20:41:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:29.585 [2024-07-15 20:41:33.229248] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:29.585 20:41:33 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:29.585 20:41:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.585 20:41:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.585 20:41:33 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:29.585 20:41:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.585 20:41:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.585 20:41:33 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:29.585 20:41:33 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:29.585 20:41:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:29.585 MallocBdevForConfigChangeCheck 00:05:29.585 20:41:33 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:29.585 20:41:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.585 20:41:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.846 20:41:33 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:29.846 20:41:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.106 20:41:33 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:30.106 INFO: shutting down applications... 00:05:30.106 20:41:33 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:30.106 20:41:33 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:30.106 20:41:33 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:30.106 20:41:33 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:30.366 Calling clear_iscsi_subsystem 00:05:30.366 Calling clear_nvmf_subsystem 00:05:30.366 Calling clear_nbd_subsystem 00:05:30.366 Calling clear_ublk_subsystem 00:05:30.366 Calling clear_vhost_blk_subsystem 00:05:30.366 Calling clear_vhost_scsi_subsystem 00:05:30.366 Calling clear_bdev_subsystem 00:05:30.366 20:41:34 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:30.366 20:41:34 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:30.366 20:41:34 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:30.366 20:41:34 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.366 20:41:34 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:30.366 20:41:34 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:30.937 20:41:34 json_config -- json_config/json_config.sh@345 -- # break 00:05:30.937 20:41:34 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:30.937 20:41:34 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:30.937 20:41:34 json_config -- json_config/common.sh@31 -- # local app=target 00:05:30.937 20:41:34 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:30.937 20:41:34 json_config -- json_config/common.sh@35 -- # [[ -n 1367201 ]] 00:05:30.937 20:41:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1367201 00:05:30.937 20:41:34 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:30.937 20:41:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.937 20:41:34 json_config -- json_config/common.sh@41 -- # kill -0 1367201 00:05:30.937 20:41:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:31.198 20:41:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:31.198 20:41:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.198 20:41:35 json_config -- json_config/common.sh@41 -- # kill -0 1367201 00:05:31.198 20:41:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:31.198 20:41:35 json_config -- json_config/common.sh@43 -- # break 00:05:31.198 20:41:35 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:31.198 20:41:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:31.198 SPDK target shutdown done 00:05:31.198 20:41:35 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:31.198 INFO: relaunching applications... 00:05:31.198 20:41:35 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:31.198 20:41:35 json_config -- json_config/common.sh@9 -- # local app=target 00:05:31.198 20:41:35 json_config -- json_config/common.sh@10 -- # shift 00:05:31.198 20:41:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.198 20:41:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.198 20:41:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.198 20:41:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.198 20:41:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.198 20:41:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1368037 00:05:31.198 20:41:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:31.198 Waiting for target to run... 00:05:31.198 20:41:35 json_config -- json_config/common.sh@25 -- # waitforlisten 1368037 /var/tmp/spdk_tgt.sock 00:05:31.198 20:41:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:31.198 20:41:35 json_config -- common/autotest_common.sh@829 -- # '[' -z 1368037 ']' 00:05:31.198 20:41:35 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.198 20:41:35 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.198 20:41:35 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.198 20:41:35 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.198 20:41:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.459 [2024-07-15 20:41:35.098099] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:05:31.459 [2024-07-15 20:41:35.098176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1368037 ] 00:05:31.459 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.720 [2024-07-15 20:41:35.365883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.720 [2024-07-15 20:41:35.419427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.290 [2024-07-15 20:41:35.913268] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.290 [2024-07-15 20:41:35.945636] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:32.290 20:41:35 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.290 20:41:35 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:32.290 20:41:35 json_config -- json_config/common.sh@26 -- # echo '' 00:05:32.290 00:05:32.290 20:41:35 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:32.290 20:41:35 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:32.290 INFO: Checking if target configuration is the same... 00:05:32.290 20:41:35 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.290 20:41:35 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:32.290 20:41:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.290 + '[' 2 -ne 2 ']' 00:05:32.290 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:32.290 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:32.290 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:32.290 +++ basename /dev/fd/62 00:05:32.290 ++ mktemp /tmp/62.XXX 00:05:32.290 + tmp_file_1=/tmp/62.sYd 00:05:32.290 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.290 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:32.290 + tmp_file_2=/tmp/spdk_tgt_config.json.5r1 00:05:32.290 + ret=0 00:05:32.290 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:32.551 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:32.551 + diff -u /tmp/62.sYd /tmp/spdk_tgt_config.json.5r1 00:05:32.551 + echo 'INFO: JSON config files are the same' 00:05:32.551 INFO: JSON config files are the same 00:05:32.551 + rm /tmp/62.sYd /tmp/spdk_tgt_config.json.5r1 00:05:32.551 + exit 0 00:05:32.551 20:41:36 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:32.551 20:41:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:32.551 INFO: changing configuration and checking if this can be detected... 
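The check just completed confirms that the relaunched target and the spdk_tgt_config.json it was started from describe the same state: both sides are normalized with config_filter.py's sort method before diffing. A condensed sketch of that comparison (paths relative to the SPDK checkout, temporary file names illustrative):

  # Normalize key/array ordering on both sides, then diff; an empty diff means
  # the live target matches the file it was launched with.
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/file_sorted.json
  diff -u /tmp/live_sorted.json /tmp/file_sorted.json && echo 'configs match'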
00:05:32.551 20:41:36 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:32.551 20:41:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:32.812 20:41:36 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:32.812 20:41:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.812 20:41:36 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.812 + '[' 2 -ne 2 ']' 00:05:32.812 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:32.812 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:32.812 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:32.812 +++ basename /dev/fd/62 00:05:32.812 ++ mktemp /tmp/62.XXX 00:05:32.812 + tmp_file_1=/tmp/62.M7x 00:05:32.812 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.812 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:32.812 + tmp_file_2=/tmp/spdk_tgt_config.json.kN6 00:05:32.812 + ret=0 00:05:32.812 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.073 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.073 + diff -u /tmp/62.M7x /tmp/spdk_tgt_config.json.kN6 00:05:33.073 + ret=1 00:05:33.073 + echo '=== Start of file: /tmp/62.M7x ===' 00:05:33.073 + cat /tmp/62.M7x 00:05:33.073 + echo '=== End of file: /tmp/62.M7x ===' 00:05:33.073 + echo '' 00:05:33.073 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kN6 ===' 00:05:33.073 + cat /tmp/spdk_tgt_config.json.kN6 00:05:33.073 + echo '=== End of file: /tmp/spdk_tgt_config.json.kN6 ===' 00:05:33.073 + echo '' 00:05:33.073 + rm /tmp/62.M7x /tmp/spdk_tgt_config.json.kN6 00:05:33.073 + exit 1 00:05:33.073 20:41:36 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:33.073 INFO: configuration change detected. 
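Here the same comparison is repeated after a deliberate change: the MallocBdevForConfigChangeCheck bdev is deleted over RPC, so the sorted configs no longer match and the diff exits non-zero. A sketch of that step, reusing the sorted file from the previous sketch:

  # Mutate the live configuration, then re-run the comparison; a non-empty diff
  # is the "configuration change detected" outcome.
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method sort \
      | diff -u - /tmp/file_sorted.json || echo 'configuration change detected'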
00:05:33.073 20:41:36 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:33.073 20:41:36 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:33.073 20:41:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.073 20:41:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.073 20:41:36 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:33.073 20:41:36 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:33.073 20:41:36 json_config -- json_config/json_config.sh@317 -- # [[ -n 1368037 ]] 00:05:33.073 20:41:36 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:33.073 20:41:36 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:33.073 20:41:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.073 20:41:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.073 20:41:36 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:33.073 20:41:36 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:33.073 20:41:36 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:33.073 20:41:36 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:33.073 20:41:36 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:33.073 20:41:36 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:33.073 20:41:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.073 20:41:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.073 20:41:36 json_config -- json_config/json_config.sh@323 -- # killprocess 1368037 00:05:33.073 20:41:36 json_config -- common/autotest_common.sh@948 -- # '[' -z 1368037 ']' 00:05:33.073 20:41:36 json_config -- common/autotest_common.sh@952 -- # kill -0 1368037 00:05:33.073 20:41:36 json_config -- common/autotest_common.sh@953 -- # uname 00:05:33.073 20:41:36 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.073 20:41:36 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1368037 00:05:33.334 20:41:36 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:33.334 20:41:36 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:33.334 20:41:36 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1368037' 00:05:33.334 killing process with pid 1368037 00:05:33.334 20:41:36 json_config -- common/autotest_common.sh@967 -- # kill 1368037 00:05:33.334 20:41:36 json_config -- common/autotest_common.sh@972 -- # wait 1368037 00:05:33.595 20:41:37 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.595 20:41:37 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:33.595 20:41:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.595 20:41:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.595 20:41:37 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:33.595 20:41:37 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:33.595 INFO: Success 00:05:33.595 00:05:33.595 real 0m6.877s 
00:05:33.595 user 0m8.291s 00:05:33.595 sys 0m1.683s 00:05:33.595 20:41:37 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.595 20:41:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.595 ************************************ 00:05:33.595 END TEST json_config 00:05:33.595 ************************************ 00:05:33.595 20:41:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:33.595 20:41:37 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:33.595 20:41:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.595 20:41:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.595 20:41:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.595 ************************************ 00:05:33.595 START TEST json_config_extra_key 00:05:33.595 ************************************ 00:05:33.595 20:41:37 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:33.595 20:41:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:33.595 20:41:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:33.595 20:41:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.595 20:41:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.595 20:41:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.595 20:41:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.595 20:41:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.595 20:41:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.595 20:41:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.595 20:41:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.595 20:41:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.595 20:41:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.595 20:41:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:33.595 20:41:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:33.595 20:41:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.595 20:41:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.596 20:41:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:33.596 20:41:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.596 20:41:37 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:33.596 20:41:37 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.596 20:41:37 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.596 20:41:37 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.596 20:41:37 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.596 20:41:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.596 20:41:37 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.596 20:41:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:33.596 20:41:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.596 20:41:37 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:33.596 20:41:37 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:33.596 20:41:37 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:33.596 20:41:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.596 20:41:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.596 20:41:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.596 20:41:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:33.596 20:41:37 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:33.596 20:41:37 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:33.596 20:41:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:33.857 20:41:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:33.857 20:41:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:33.857 20:41:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:33.857 20:41:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:33.857 20:41:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:33.857 20:41:37 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:33.857 20:41:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:33.857 20:41:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:33.857 20:41:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:33.857 20:41:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:33.857 INFO: launching applications... 00:05:33.857 20:41:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:33.857 20:41:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:33.857 20:41:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:33.857 20:41:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:33.857 20:41:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:33.857 20:41:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:33.857 20:41:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.857 20:41:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.857 20:41:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1368788 00:05:33.857 20:41:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:33.857 Waiting for target to run... 00:05:33.857 20:41:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1368788 /var/tmp/spdk_tgt.sock 00:05:33.857 20:41:37 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1368788 ']' 00:05:33.857 20:41:37 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:33.857 20:41:37 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:33.857 20:41:37 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.857 20:41:37 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:33.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:33.857 20:41:37 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.857 20:41:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:33.857 [2024-07-15 20:41:37.546847] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:05:33.857 [2024-07-15 20:41:37.546916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1368788 ] 00:05:33.857 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.117 [2024-07-15 20:41:37.809394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.117 [2024-07-15 20:41:37.862270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.687 20:41:38 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.687 20:41:38 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:34.687 20:41:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:34.687 00:05:34.687 20:41:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:34.687 INFO: shutting down applications... 00:05:34.687 20:41:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:34.687 20:41:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:34.687 20:41:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:34.687 20:41:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1368788 ]] 00:05:34.687 20:41:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1368788 00:05:34.687 20:41:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:34.687 20:41:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.687 20:41:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1368788 00:05:34.687 20:41:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.947 20:41:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.947 20:41:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.947 20:41:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1368788 00:05:34.947 20:41:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:34.947 20:41:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:34.947 20:41:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:34.947 20:41:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:34.947 SPDK target shutdown done 00:05:34.947 20:41:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:34.947 Success 00:05:34.947 00:05:34.947 real 0m1.445s 00:05:34.947 user 0m1.118s 00:05:34.947 sys 0m0.364s 00:05:34.947 20:41:38 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.947 20:41:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:34.947 ************************************ 00:05:34.947 END TEST json_config_extra_key 00:05:34.947 ************************************ 00:05:35.207 20:41:38 -- common/autotest_common.sh@1142 -- # return 0 00:05:35.207 20:41:38 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.207 20:41:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.207 20:41:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.207 20:41:38 -- 
common/autotest_common.sh@10 -- # set +x 00:05:35.207 ************************************ 00:05:35.207 START TEST alias_rpc 00:05:35.207 ************************************ 00:05:35.207 20:41:38 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.207 * Looking for test storage... 00:05:35.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:35.207 20:41:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.207 20:41:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1369167 00:05:35.207 20:41:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1369167 00:05:35.207 20:41:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.207 20:41:39 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1369167 ']' 00:05:35.207 20:41:39 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.207 20:41:39 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.207 20:41:39 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.207 20:41:39 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.207 20:41:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.207 [2024-07-15 20:41:39.061476] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:05:35.207 [2024-07-15 20:41:39.061530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1369167 ] 00:05:35.207 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.467 [2024-07-15 20:41:39.121205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.467 [2024-07-15 20:41:39.188899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.035 20:41:39 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.035 20:41:39 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:36.035 20:41:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:36.295 20:41:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1369167 00:05:36.295 20:41:39 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1369167 ']' 00:05:36.295 20:41:39 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1369167 00:05:36.295 20:41:39 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:36.295 20:41:39 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.295 20:41:39 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1369167 00:05:36.295 20:41:40 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.295 20:41:40 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.295 20:41:40 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1369167' 00:05:36.295 killing process with pid 1369167 00:05:36.295 20:41:40 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1369167 00:05:36.295 20:41:40 alias_rpc -- common/autotest_common.sh@972 -- # wait 1369167 00:05:36.560 00:05:36.560 real 0m1.317s 00:05:36.560 user 0m1.423s 00:05:36.560 sys 0m0.346s 00:05:36.560 20:41:40 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.560 20:41:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.560 ************************************ 00:05:36.560 END TEST alias_rpc 00:05:36.560 ************************************ 00:05:36.560 20:41:40 -- common/autotest_common.sh@1142 -- # return 0 00:05:36.560 20:41:40 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:36.560 20:41:40 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:36.560 20:41:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.560 20:41:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.560 20:41:40 -- common/autotest_common.sh@10 -- # set +x 00:05:36.560 ************************************ 00:05:36.560 START TEST spdkcli_tcp 00:05:36.560 ************************************ 00:05:36.560 20:41:40 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:36.560 * Looking for test storage... 00:05:36.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:36.560 20:41:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:36.560 20:41:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:36.560 20:41:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:36.560 20:41:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:36.560 20:41:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:36.560 20:41:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:36.560 20:41:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:36.560 20:41:40 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:36.560 20:41:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:36.560 20:41:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1369431 00:05:36.560 20:41:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1369431 00:05:36.560 20:41:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:36.560 20:41:40 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1369431 ']' 00:05:36.560 20:41:40 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.560 20:41:40 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.560 20:41:40 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.560 20:41:40 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.560 20:41:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:36.821 [2024-07-15 20:41:40.465306] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
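Roughly how the spdkcli_tcp run that starts here exposes the target's UNIX-domain RPC socket over TCP; the address, port and rpc.py retry/timeout flags are the ones tcp.sh uses below, while the backgrounding and cleanup are only a sketch.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # drive the same RPC over TCP instead of the UNIX socket
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill $socat_pid 2>/dev/null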
00:05:36.821 [2024-07-15 20:41:40.465379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1369431 ] 00:05:36.821 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.821 [2024-07-15 20:41:40.532045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.821 [2024-07-15 20:41:40.607605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.821 [2024-07-15 20:41:40.607607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.391 20:41:41 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.391 20:41:41 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:37.391 20:41:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1369578 00:05:37.391 20:41:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:37.391 20:41:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:37.651 [ 00:05:37.651 "bdev_malloc_delete", 00:05:37.651 "bdev_malloc_create", 00:05:37.651 "bdev_null_resize", 00:05:37.651 "bdev_null_delete", 00:05:37.651 "bdev_null_create", 00:05:37.651 "bdev_nvme_cuse_unregister", 00:05:37.651 "bdev_nvme_cuse_register", 00:05:37.651 "bdev_opal_new_user", 00:05:37.651 "bdev_opal_set_lock_state", 00:05:37.651 "bdev_opal_delete", 00:05:37.651 "bdev_opal_get_info", 00:05:37.651 "bdev_opal_create", 00:05:37.651 "bdev_nvme_opal_revert", 00:05:37.651 "bdev_nvme_opal_init", 00:05:37.651 "bdev_nvme_send_cmd", 00:05:37.651 "bdev_nvme_get_path_iostat", 00:05:37.651 "bdev_nvme_get_mdns_discovery_info", 00:05:37.651 "bdev_nvme_stop_mdns_discovery", 00:05:37.651 "bdev_nvme_start_mdns_discovery", 00:05:37.651 "bdev_nvme_set_multipath_policy", 00:05:37.651 "bdev_nvme_set_preferred_path", 00:05:37.651 "bdev_nvme_get_io_paths", 00:05:37.651 "bdev_nvme_remove_error_injection", 00:05:37.651 "bdev_nvme_add_error_injection", 00:05:37.651 "bdev_nvme_get_discovery_info", 00:05:37.651 "bdev_nvme_stop_discovery", 00:05:37.651 "bdev_nvme_start_discovery", 00:05:37.651 "bdev_nvme_get_controller_health_info", 00:05:37.651 "bdev_nvme_disable_controller", 00:05:37.651 "bdev_nvme_enable_controller", 00:05:37.651 "bdev_nvme_reset_controller", 00:05:37.651 "bdev_nvme_get_transport_statistics", 00:05:37.651 "bdev_nvme_apply_firmware", 00:05:37.651 "bdev_nvme_detach_controller", 00:05:37.651 "bdev_nvme_get_controllers", 00:05:37.651 "bdev_nvme_attach_controller", 00:05:37.651 "bdev_nvme_set_hotplug", 00:05:37.651 "bdev_nvme_set_options", 00:05:37.651 "bdev_passthru_delete", 00:05:37.651 "bdev_passthru_create", 00:05:37.651 "bdev_lvol_set_parent_bdev", 00:05:37.651 "bdev_lvol_set_parent", 00:05:37.651 "bdev_lvol_check_shallow_copy", 00:05:37.651 "bdev_lvol_start_shallow_copy", 00:05:37.651 "bdev_lvol_grow_lvstore", 00:05:37.651 "bdev_lvol_get_lvols", 00:05:37.651 "bdev_lvol_get_lvstores", 00:05:37.651 "bdev_lvol_delete", 00:05:37.651 "bdev_lvol_set_read_only", 00:05:37.651 "bdev_lvol_resize", 00:05:37.651 "bdev_lvol_decouple_parent", 00:05:37.651 "bdev_lvol_inflate", 00:05:37.651 "bdev_lvol_rename", 00:05:37.651 "bdev_lvol_clone_bdev", 00:05:37.651 "bdev_lvol_clone", 00:05:37.651 "bdev_lvol_snapshot", 00:05:37.651 "bdev_lvol_create", 00:05:37.651 "bdev_lvol_delete_lvstore", 00:05:37.651 
"bdev_lvol_rename_lvstore", 00:05:37.651 "bdev_lvol_create_lvstore", 00:05:37.651 "bdev_raid_set_options", 00:05:37.651 "bdev_raid_remove_base_bdev", 00:05:37.651 "bdev_raid_add_base_bdev", 00:05:37.651 "bdev_raid_delete", 00:05:37.651 "bdev_raid_create", 00:05:37.651 "bdev_raid_get_bdevs", 00:05:37.651 "bdev_error_inject_error", 00:05:37.651 "bdev_error_delete", 00:05:37.651 "bdev_error_create", 00:05:37.651 "bdev_split_delete", 00:05:37.651 "bdev_split_create", 00:05:37.651 "bdev_delay_delete", 00:05:37.651 "bdev_delay_create", 00:05:37.651 "bdev_delay_update_latency", 00:05:37.651 "bdev_zone_block_delete", 00:05:37.651 "bdev_zone_block_create", 00:05:37.651 "blobfs_create", 00:05:37.651 "blobfs_detect", 00:05:37.651 "blobfs_set_cache_size", 00:05:37.651 "bdev_aio_delete", 00:05:37.651 "bdev_aio_rescan", 00:05:37.651 "bdev_aio_create", 00:05:37.651 "bdev_ftl_set_property", 00:05:37.651 "bdev_ftl_get_properties", 00:05:37.651 "bdev_ftl_get_stats", 00:05:37.651 "bdev_ftl_unmap", 00:05:37.651 "bdev_ftl_unload", 00:05:37.651 "bdev_ftl_delete", 00:05:37.651 "bdev_ftl_load", 00:05:37.651 "bdev_ftl_create", 00:05:37.651 "bdev_virtio_attach_controller", 00:05:37.651 "bdev_virtio_scsi_get_devices", 00:05:37.651 "bdev_virtio_detach_controller", 00:05:37.651 "bdev_virtio_blk_set_hotplug", 00:05:37.651 "bdev_iscsi_delete", 00:05:37.651 "bdev_iscsi_create", 00:05:37.651 "bdev_iscsi_set_options", 00:05:37.651 "accel_error_inject_error", 00:05:37.651 "ioat_scan_accel_module", 00:05:37.651 "dsa_scan_accel_module", 00:05:37.651 "iaa_scan_accel_module", 00:05:37.651 "vfu_virtio_create_scsi_endpoint", 00:05:37.651 "vfu_virtio_scsi_remove_target", 00:05:37.651 "vfu_virtio_scsi_add_target", 00:05:37.652 "vfu_virtio_create_blk_endpoint", 00:05:37.652 "vfu_virtio_delete_endpoint", 00:05:37.652 "keyring_file_remove_key", 00:05:37.652 "keyring_file_add_key", 00:05:37.652 "keyring_linux_set_options", 00:05:37.652 "iscsi_get_histogram", 00:05:37.652 "iscsi_enable_histogram", 00:05:37.652 "iscsi_set_options", 00:05:37.652 "iscsi_get_auth_groups", 00:05:37.652 "iscsi_auth_group_remove_secret", 00:05:37.652 "iscsi_auth_group_add_secret", 00:05:37.652 "iscsi_delete_auth_group", 00:05:37.652 "iscsi_create_auth_group", 00:05:37.652 "iscsi_set_discovery_auth", 00:05:37.652 "iscsi_get_options", 00:05:37.652 "iscsi_target_node_request_logout", 00:05:37.652 "iscsi_target_node_set_redirect", 00:05:37.652 "iscsi_target_node_set_auth", 00:05:37.652 "iscsi_target_node_add_lun", 00:05:37.652 "iscsi_get_stats", 00:05:37.652 "iscsi_get_connections", 00:05:37.652 "iscsi_portal_group_set_auth", 00:05:37.652 "iscsi_start_portal_group", 00:05:37.652 "iscsi_delete_portal_group", 00:05:37.652 "iscsi_create_portal_group", 00:05:37.652 "iscsi_get_portal_groups", 00:05:37.652 "iscsi_delete_target_node", 00:05:37.652 "iscsi_target_node_remove_pg_ig_maps", 00:05:37.652 "iscsi_target_node_add_pg_ig_maps", 00:05:37.652 "iscsi_create_target_node", 00:05:37.652 "iscsi_get_target_nodes", 00:05:37.652 "iscsi_delete_initiator_group", 00:05:37.652 "iscsi_initiator_group_remove_initiators", 00:05:37.652 "iscsi_initiator_group_add_initiators", 00:05:37.652 "iscsi_create_initiator_group", 00:05:37.652 "iscsi_get_initiator_groups", 00:05:37.652 "nvmf_set_crdt", 00:05:37.652 "nvmf_set_config", 00:05:37.652 "nvmf_set_max_subsystems", 00:05:37.652 "nvmf_stop_mdns_prr", 00:05:37.652 "nvmf_publish_mdns_prr", 00:05:37.652 "nvmf_subsystem_get_listeners", 00:05:37.652 "nvmf_subsystem_get_qpairs", 00:05:37.652 "nvmf_subsystem_get_controllers", 00:05:37.652 
"nvmf_get_stats", 00:05:37.652 "nvmf_get_transports", 00:05:37.652 "nvmf_create_transport", 00:05:37.652 "nvmf_get_targets", 00:05:37.652 "nvmf_delete_target", 00:05:37.652 "nvmf_create_target", 00:05:37.652 "nvmf_subsystem_allow_any_host", 00:05:37.652 "nvmf_subsystem_remove_host", 00:05:37.652 "nvmf_subsystem_add_host", 00:05:37.652 "nvmf_ns_remove_host", 00:05:37.652 "nvmf_ns_add_host", 00:05:37.652 "nvmf_subsystem_remove_ns", 00:05:37.652 "nvmf_subsystem_add_ns", 00:05:37.652 "nvmf_subsystem_listener_set_ana_state", 00:05:37.652 "nvmf_discovery_get_referrals", 00:05:37.652 "nvmf_discovery_remove_referral", 00:05:37.652 "nvmf_discovery_add_referral", 00:05:37.652 "nvmf_subsystem_remove_listener", 00:05:37.652 "nvmf_subsystem_add_listener", 00:05:37.652 "nvmf_delete_subsystem", 00:05:37.652 "nvmf_create_subsystem", 00:05:37.652 "nvmf_get_subsystems", 00:05:37.652 "env_dpdk_get_mem_stats", 00:05:37.652 "nbd_get_disks", 00:05:37.652 "nbd_stop_disk", 00:05:37.652 "nbd_start_disk", 00:05:37.652 "ublk_recover_disk", 00:05:37.652 "ublk_get_disks", 00:05:37.652 "ublk_stop_disk", 00:05:37.652 "ublk_start_disk", 00:05:37.652 "ublk_destroy_target", 00:05:37.652 "ublk_create_target", 00:05:37.652 "virtio_blk_create_transport", 00:05:37.652 "virtio_blk_get_transports", 00:05:37.652 "vhost_controller_set_coalescing", 00:05:37.652 "vhost_get_controllers", 00:05:37.652 "vhost_delete_controller", 00:05:37.652 "vhost_create_blk_controller", 00:05:37.652 "vhost_scsi_controller_remove_target", 00:05:37.652 "vhost_scsi_controller_add_target", 00:05:37.652 "vhost_start_scsi_controller", 00:05:37.652 "vhost_create_scsi_controller", 00:05:37.652 "thread_set_cpumask", 00:05:37.652 "framework_get_governor", 00:05:37.652 "framework_get_scheduler", 00:05:37.652 "framework_set_scheduler", 00:05:37.652 "framework_get_reactors", 00:05:37.652 "thread_get_io_channels", 00:05:37.652 "thread_get_pollers", 00:05:37.652 "thread_get_stats", 00:05:37.652 "framework_monitor_context_switch", 00:05:37.652 "spdk_kill_instance", 00:05:37.652 "log_enable_timestamps", 00:05:37.652 "log_get_flags", 00:05:37.652 "log_clear_flag", 00:05:37.652 "log_set_flag", 00:05:37.652 "log_get_level", 00:05:37.652 "log_set_level", 00:05:37.652 "log_get_print_level", 00:05:37.652 "log_set_print_level", 00:05:37.652 "framework_enable_cpumask_locks", 00:05:37.652 "framework_disable_cpumask_locks", 00:05:37.652 "framework_wait_init", 00:05:37.652 "framework_start_init", 00:05:37.652 "scsi_get_devices", 00:05:37.652 "bdev_get_histogram", 00:05:37.652 "bdev_enable_histogram", 00:05:37.652 "bdev_set_qos_limit", 00:05:37.652 "bdev_set_qd_sampling_period", 00:05:37.652 "bdev_get_bdevs", 00:05:37.652 "bdev_reset_iostat", 00:05:37.652 "bdev_get_iostat", 00:05:37.652 "bdev_examine", 00:05:37.652 "bdev_wait_for_examine", 00:05:37.652 "bdev_set_options", 00:05:37.652 "notify_get_notifications", 00:05:37.652 "notify_get_types", 00:05:37.652 "accel_get_stats", 00:05:37.652 "accel_set_options", 00:05:37.652 "accel_set_driver", 00:05:37.652 "accel_crypto_key_destroy", 00:05:37.652 "accel_crypto_keys_get", 00:05:37.652 "accel_crypto_key_create", 00:05:37.652 "accel_assign_opc", 00:05:37.652 "accel_get_module_info", 00:05:37.652 "accel_get_opc_assignments", 00:05:37.652 "vmd_rescan", 00:05:37.652 "vmd_remove_device", 00:05:37.652 "vmd_enable", 00:05:37.652 "sock_get_default_impl", 00:05:37.652 "sock_set_default_impl", 00:05:37.652 "sock_impl_set_options", 00:05:37.652 "sock_impl_get_options", 00:05:37.652 "iobuf_get_stats", 00:05:37.652 "iobuf_set_options", 
00:05:37.652 "keyring_get_keys", 00:05:37.652 "framework_get_pci_devices", 00:05:37.652 "framework_get_config", 00:05:37.652 "framework_get_subsystems", 00:05:37.652 "vfu_tgt_set_base_path", 00:05:37.652 "trace_get_info", 00:05:37.652 "trace_get_tpoint_group_mask", 00:05:37.652 "trace_disable_tpoint_group", 00:05:37.652 "trace_enable_tpoint_group", 00:05:37.652 "trace_clear_tpoint_mask", 00:05:37.652 "trace_set_tpoint_mask", 00:05:37.652 "spdk_get_version", 00:05:37.652 "rpc_get_methods" 00:05:37.652 ] 00:05:37.652 20:41:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:37.652 20:41:41 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:37.652 20:41:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.652 20:41:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:37.652 20:41:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1369431 00:05:37.652 20:41:41 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1369431 ']' 00:05:37.652 20:41:41 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1369431 00:05:37.652 20:41:41 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:37.652 20:41:41 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.652 20:41:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1369431 00:05:37.652 20:41:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.652 20:41:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.652 20:41:41 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1369431' 00:05:37.652 killing process with pid 1369431 00:05:37.652 20:41:41 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1369431 00:05:37.652 20:41:41 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1369431 00:05:37.912 00:05:37.912 real 0m1.406s 00:05:37.912 user 0m2.571s 00:05:37.912 sys 0m0.423s 00:05:37.912 20:41:41 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.912 20:41:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.912 ************************************ 00:05:37.912 END TEST spdkcli_tcp 00:05:37.912 ************************************ 00:05:37.912 20:41:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:37.912 20:41:41 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:37.912 20:41:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.912 20:41:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.912 20:41:41 -- common/autotest_common.sh@10 -- # set +x 00:05:37.912 ************************************ 00:05:37.912 START TEST dpdk_mem_utility 00:05:37.912 ************************************ 00:05:37.912 20:41:41 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.173 * Looking for test storage... 
00:05:38.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:38.173 20:41:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:38.173 20:41:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1369720 00:05:38.173 20:41:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1369720 00:05:38.174 20:41:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.174 20:41:41 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1369720 ']' 00:05:38.174 20:41:41 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.174 20:41:41 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.174 20:41:41 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.174 20:41:41 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.174 20:41:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.174 [2024-07-15 20:41:41.926370] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:05:38.174 [2024-07-15 20:41:41.926437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1369720 ] 00:05:38.174 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.174 [2024-07-15 20:41:41.990051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.174 [2024-07-15 20:41:42.065256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.115 20:41:42 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.115 20:41:42 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:39.115 20:41:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:39.115 20:41:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:39.115 20:41:42 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.115 20:41:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.115 { 00:05:39.115 "filename": "/tmp/spdk_mem_dump.txt" 00:05:39.115 } 00:05:39.115 20:41:42 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.115 20:41:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:39.115 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:39.115 1 heaps totaling size 814.000000 MiB 00:05:39.115 size: 814.000000 MiB heap id: 0 00:05:39.115 end heaps---------- 00:05:39.115 8 mempools totaling size 598.116089 MiB 00:05:39.115 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:39.115 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:39.115 size: 84.521057 MiB name: bdev_io_1369720 00:05:39.115 size: 51.011292 MiB name: evtpool_1369720 00:05:39.115 
size: 50.003479 MiB name: msgpool_1369720 00:05:39.115 size: 21.763794 MiB name: PDU_Pool 00:05:39.115 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:39.115 size: 0.026123 MiB name: Session_Pool 00:05:39.115 end mempools------- 00:05:39.115 6 memzones totaling size 4.142822 MiB 00:05:39.115 size: 1.000366 MiB name: RG_ring_0_1369720 00:05:39.115 size: 1.000366 MiB name: RG_ring_1_1369720 00:05:39.115 size: 1.000366 MiB name: RG_ring_4_1369720 00:05:39.115 size: 1.000366 MiB name: RG_ring_5_1369720 00:05:39.115 size: 0.125366 MiB name: RG_ring_2_1369720 00:05:39.115 size: 0.015991 MiB name: RG_ring_3_1369720 00:05:39.115 end memzones------- 00:05:39.115 20:41:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:39.115 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:39.115 list of free elements. size: 12.519348 MiB 00:05:39.115 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:39.115 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:39.115 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:39.115 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:39.115 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:39.115 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:39.115 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:39.115 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:39.115 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:39.115 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:39.115 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:39.115 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:39.115 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:39.115 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:39.115 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:39.115 list of standard malloc elements. 
size: 199.218079 MiB 00:05:39.115 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:39.115 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:39.115 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:39.115 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:39.115 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:39.115 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:39.115 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:39.115 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:39.115 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:39.115 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:39.115 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:39.115 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:39.115 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:39.115 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:39.115 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:39.115 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:39.115 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:39.115 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:39.115 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:39.115 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:39.115 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:39.115 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:39.115 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:39.115 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:39.115 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:39.115 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:39.115 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:39.115 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:39.115 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:39.115 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:39.115 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:39.115 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:39.115 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:39.115 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:39.115 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:39.115 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:39.115 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:39.115 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:39.115 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:39.115 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:39.116 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:39.116 list of memzone associated elements. 
size: 602.262573 MiB 00:05:39.116 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:39.116 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:39.116 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:39.116 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:39.116 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:39.116 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1369720_0 00:05:39.116 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:39.116 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1369720_0 00:05:39.116 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:39.116 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1369720_0 00:05:39.116 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:39.116 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:39.116 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:39.116 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:39.116 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:39.116 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1369720 00:05:39.116 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:39.116 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1369720 00:05:39.116 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:39.116 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1369720 00:05:39.116 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:39.116 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:39.116 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:39.116 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:39.116 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:39.116 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:39.116 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:39.116 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:39.116 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:39.116 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1369720 00:05:39.116 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:39.116 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1369720 00:05:39.116 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:39.116 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1369720 00:05:39.116 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:39.116 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1369720 00:05:39.116 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:39.116 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1369720 00:05:39.116 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:39.116 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:39.116 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:39.116 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:39.116 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:39.116 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:39.116 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:39.116 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1369720 00:05:39.116 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:39.116 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:39.116 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:39.116 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:39.116 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:39.116 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1369720 00:05:39.116 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:39.116 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:39.116 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:39.116 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1369720 00:05:39.116 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:39.116 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1369720 00:05:39.116 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:39.116 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:39.116 20:41:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:39.116 20:41:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1369720 00:05:39.116 20:41:42 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1369720 ']' 00:05:39.116 20:41:42 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1369720 00:05:39.116 20:41:42 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:39.116 20:41:42 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.116 20:41:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1369720 00:05:39.116 20:41:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.116 20:41:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.116 20:41:42 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1369720' 00:05:39.116 killing process with pid 1369720 00:05:39.116 20:41:42 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1369720 00:05:39.116 20:41:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1369720 00:05:39.377 00:05:39.377 real 0m1.282s 00:05:39.377 user 0m1.376s 00:05:39.377 sys 0m0.341s 00:05:39.377 20:41:43 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.377 20:41:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.377 ************************************ 00:05:39.377 END TEST dpdk_mem_utility 00:05:39.377 ************************************ 00:05:39.377 20:41:43 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.377 20:41:43 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:39.377 20:41:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.377 20:41:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.377 20:41:43 -- common/autotest_common.sh@10 -- # set +x 00:05:39.377 ************************************ 00:05:39.377 START TEST event 00:05:39.377 ************************************ 00:05:39.377 20:41:43 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:39.377 * Looking for test storage... 
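A short sketch of the dump-and-parse flow test_dpdk_mem_info.sh just drove above: the RPC reports the dump file it wrote (/tmp/spdk_mem_dump.txt in this run), which dpdk_mem_info.py then summarizes. Flags mirror the log; default file locations are assumed.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt
  $SPDK/scripts/dpdk_mem_info.py                  # heap / mempool / memzone totals
  $SPDK/scripts/dpdk_mem_info.py -m 0             # per-element detail, as used above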
00:05:39.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:39.377 20:41:43 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:39.377 20:41:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:39.377 20:41:43 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.377 20:41:43 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:39.377 20:41:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.377 20:41:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.377 ************************************ 00:05:39.377 START TEST event_perf 00:05:39.377 ************************************ 00:05:39.377 20:41:43 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.638 Running I/O for 1 seconds...[2024-07-15 20:41:43.281460] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:05:39.638 [2024-07-15 20:41:43.281560] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1370038 ] 00:05:39.638 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.638 [2024-07-15 20:41:43.345785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:39.638 [2024-07-15 20:41:43.415157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.638 [2024-07-15 20:41:43.415374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.638 [2024-07-15 20:41:43.415375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.638 Running I/O for 1 seconds...[2024-07-15 20:41:43.415223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.578 00:05:40.578 lcore 0: 176655 00:05:40.578 lcore 1: 176656 00:05:40.578 lcore 2: 176654 00:05:40.578 lcore 3: 176657 00:05:40.578 done. 00:05:40.578 00:05:40.578 real 0m1.208s 00:05:40.578 user 0m4.135s 00:05:40.578 sys 0m0.071s 00:05:40.578 20:41:44 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.578 20:41:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:40.578 ************************************ 00:05:40.578 END TEST event_perf 00:05:40.578 ************************************ 00:05:40.839 20:41:44 event -- common/autotest_common.sh@1142 -- # return 0 00:05:40.839 20:41:44 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:40.839 20:41:44 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:40.839 20:41:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.839 20:41:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.839 ************************************ 00:05:40.839 START TEST event_reactor 00:05:40.839 ************************************ 00:05:40.839 20:41:44 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:40.839 [2024-07-15 20:41:44.569307] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:05:40.839 [2024-07-15 20:41:44.569390] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1370392 ] 00:05:40.839 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.839 [2024-07-15 20:41:44.631676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.839 [2024-07-15 20:41:44.696830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.223 test_start 00:05:42.223 oneshot 00:05:42.223 tick 100 00:05:42.223 tick 100 00:05:42.223 tick 250 00:05:42.223 tick 100 00:05:42.223 tick 100 00:05:42.223 tick 250 00:05:42.223 tick 100 00:05:42.223 tick 500 00:05:42.223 tick 100 00:05:42.223 tick 100 00:05:42.223 tick 250 00:05:42.223 tick 100 00:05:42.223 tick 100 00:05:42.223 test_end 00:05:42.223 00:05:42.223 real 0m1.202s 00:05:42.223 user 0m1.129s 00:05:42.223 sys 0m0.069s 00:05:42.223 20:41:45 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.223 20:41:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:42.223 ************************************ 00:05:42.223 END TEST event_reactor 00:05:42.223 ************************************ 00:05:42.223 20:41:45 event -- common/autotest_common.sh@1142 -- # return 0 00:05:42.223 20:41:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.223 20:41:45 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:42.223 20:41:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.223 20:41:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.223 ************************************ 00:05:42.223 START TEST event_reactor_perf 00:05:42.223 ************************************ 00:05:42.223 20:41:45 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.223 [2024-07-15 20:41:45.849402] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:05:42.223 [2024-07-15 20:41:45.849495] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1370740 ] 00:05:42.223 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.223 [2024-07-15 20:41:45.911767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.223 [2024-07-15 20:41:45.976804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.162 test_start 00:05:43.162 test_end 00:05:43.162 Performance: 369347 events per second 00:05:43.162 00:05:43.162 real 0m1.203s 00:05:43.162 user 0m1.128s 00:05:43.162 sys 0m0.071s 00:05:43.162 20:41:47 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.162 20:41:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.162 ************************************ 00:05:43.162 END TEST event_reactor_perf 00:05:43.162 ************************************ 00:05:43.424 20:41:47 event -- common/autotest_common.sh@1142 -- # return 0 00:05:43.424 20:41:47 event -- event/event.sh@49 -- # uname -s 00:05:43.424 20:41:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:43.424 20:41:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:43.424 20:41:47 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.424 20:41:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.424 20:41:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.424 ************************************ 00:05:43.424 START TEST event_scheduler 00:05:43.424 ************************************ 00:05:43.424 20:41:47 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:43.424 * Looking for test storage... 00:05:43.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:43.424 20:41:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:43.424 20:41:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1371055 00:05:43.424 20:41:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.424 20:41:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:43.424 20:41:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1371055 00:05:43.424 20:41:47 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1371055 ']' 00:05:43.424 20:41:47 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.424 20:41:47 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.424 20:41:47 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:43.424 20:41:47 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.424 20:41:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.424 [2024-07-15 20:41:47.264391] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:05:43.424 [2024-07-15 20:41:47.264463] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1371055 ] 00:05:43.424 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.685 [2024-07-15 20:41:47.320095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.685 [2024-07-15 20:41:47.387568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.685 [2024-07-15 20:41:47.387729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.685 [2024-07-15 20:41:47.387886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.685 [2024-07-15 20:41:47.387887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.256 20:41:48 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.256 20:41:48 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:44.256 20:41:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:44.256 20:41:48 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.256 20:41:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.256 [2024-07-15 20:41:48.057967] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:44.256 [2024-07-15 20:41:48.057981] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:44.256 [2024-07-15 20:41:48.057988] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:44.256 [2024-07-15 20:41:48.057992] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:44.256 [2024-07-15 20:41:48.057996] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:44.256 20:41:48 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.256 20:41:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:44.256 20:41:48 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.256 20:41:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.257 [2024-07-15 20:41:48.111921] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
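The scheduler selection sequence captured above, as a standalone sketch: with the app started under --wait-for-rpc, the scheduler has to be chosen before framework_start_init completes initialization (rpc.py defaults to /var/tmp/spdk.sock, matching this run).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc framework_set_scheduler dynamic   # only valid while the app is still waiting for RPC
  $rpc framework_start_init              # finish startup once the scheduler is set
  $rpc framework_get_scheduler           # optional check that 'dynamic' is active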
00:05:44.257 20:41:48 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.257 20:41:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:44.257 20:41:48 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.257 20:41:48 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.257 20:41:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.257 ************************************ 00:05:44.257 START TEST scheduler_create_thread 00:05:44.257 ************************************ 00:05:44.257 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:44.257 20:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:44.257 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.257 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.518 2 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.518 3 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.518 4 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.518 5 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.518 6 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.518 7 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.518 8 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.518 9 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.518 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.105 10 00:05:45.105 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.105 20:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:45.105 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.105 20:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.519 20:41:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.519 20:41:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:46.519 20:41:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:46.519 20:41:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.519 20:41:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.089 20:41:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.089 20:41:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:47.089 20:41:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.089 20:41:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.031 20:41:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.031 20:41:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:48.031 20:41:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:48.031 20:41:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.031 20:41:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.606 20:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.606 00:05:48.606 real 0m4.222s 00:05:48.606 user 0m0.024s 00:05:48.606 sys 0m0.007s 00:05:48.606 20:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.606 20:41:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.606 ************************************ 00:05:48.606 END TEST scheduler_create_thread 00:05:48.606 ************************************ 00:05:48.606 20:41:52 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:48.606 20:41:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:48.606 20:41:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1371055 00:05:48.606 20:41:52 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1371055 ']' 00:05:48.606 20:41:52 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1371055 00:05:48.606 20:41:52 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:48.606 20:41:52 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.606 20:41:52 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1371055 00:05:48.606 20:41:52 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:48.606 20:41:52 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:48.606 20:41:52 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1371055' 00:05:48.606 killing process with pid 1371055 00:05:48.606 20:41:52 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1371055 00:05:48.606 20:41:52 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1371055 00:05:48.866 [2024-07-15 20:41:52.649149] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
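The per-thread calls in the scheduler_create_thread sub-test above come from the test-local RPC plugin rather than core SPDK; a condensed sketch, assuming scheduler_plugin is importable (scheduler.sh arranges PYTHONPATH for this).
  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py --plugin scheduler_plugin"
  id=$($rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)  # thread pinned to core 0 (mask 0x1), 100% active
  $rpc scheduler_thread_set_active "$id" 50                          # lower its active percentage
  $rpc scheduler_thread_delete "$id"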
00:05:49.126 00:05:49.126 real 0m5.706s 00:05:49.126 user 0m12.763s 00:05:49.126 sys 0m0.354s 00:05:49.126 20:41:52 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.126 20:41:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.126 ************************************ 00:05:49.126 END TEST event_scheduler 00:05:49.126 ************************************ 00:05:49.126 20:41:52 event -- common/autotest_common.sh@1142 -- # return 0 00:05:49.126 20:41:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:49.126 20:41:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:49.126 20:41:52 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.126 20:41:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.126 20:41:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.126 ************************************ 00:05:49.126 START TEST app_repeat 00:05:49.126 ************************************ 00:05:49.126 20:41:52 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:49.127 20:41:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.127 20:41:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.127 20:41:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:49.127 20:41:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.127 20:41:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:49.127 20:41:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:49.127 20:41:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:49.127 20:41:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1372192 00:05:49.127 20:41:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.127 20:41:52 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:49.127 20:41:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1372192' 00:05:49.127 Process app_repeat pid: 1372192 00:05:49.127 20:41:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:49.127 20:41:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:49.127 spdk_app_start Round 0 00:05:49.127 20:41:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1372192 /var/tmp/spdk-nbd.sock 00:05:49.127 20:41:52 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1372192 ']' 00:05:49.127 20:41:52 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.127 20:41:52 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.127 20:41:52 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:49.127 20:41:52 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.127 20:41:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.127 [2024-07-15 20:41:52.942235] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
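app_repeat launches a single SPDK app instance with a four-second framework restart interval and talks to it over /var/tmp/spdk-nbd.sock. A rough sketch of the launch-and-wait pattern visible above, with waitforlisten approximated by a simple poll loop (the real helper retries far more carefully and cleans up on failure):

  sock=/var/tmp/spdk-nbd.sock
  ./test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'kill $repeat_pid; exit 1' SIGINT SIGTERM EXIT
  # wait until the app has created its RPC socket and answers a basic call
  until [ -S "$sock" ] && ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done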
00:05:49.127 [2024-07-15 20:41:52.942325] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1372192 ] 00:05:49.127 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.127 [2024-07-15 20:41:53.004513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.388 [2024-07-15 20:41:53.075421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.388 [2024-07-15 20:41:53.075424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.957 20:41:53 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.957 20:41:53 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:49.957 20:41:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.218 Malloc0 00:05:50.218 20:41:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.218 Malloc1 00:05:50.218 20:41:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.218 20:41:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.218 20:41:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.218 20:41:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.218 20:41:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.218 20:41:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.218 20:41:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.218 20:41:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.218 20:41:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.218 20:41:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.218 20:41:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.218 20:41:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.218 20:41:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.218 20:41:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.218 20:41:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.218 20:41:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.478 /dev/nbd0 00:05:50.478 20:41:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.478 20:41:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.478 20:41:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:50.478 20:41:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:50.478 20:41:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:50.478 20:41:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:50.478 20:41:54 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:50.478 20:41:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:50.478 20:41:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:50.478 20:41:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:50.478 20:41:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.478 1+0 records in 00:05:50.478 1+0 records out 00:05:50.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289345 s, 14.2 MB/s 00:05:50.478 20:41:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.478 20:41:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:50.478 20:41:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.478 20:41:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:50.478 20:41:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:50.478 20:41:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.478 20:41:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.478 20:41:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.739 /dev/nbd1 00:05:50.739 20:41:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.739 20:41:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.739 20:41:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:50.739 20:41:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:50.739 20:41:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:50.739 20:41:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:50.739 20:41:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:50.739 20:41:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:50.739 20:41:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:50.739 20:41:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:50.739 20:41:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.739 1+0 records in 00:05:50.739 1+0 records out 00:05:50.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242447 s, 16.9 MB/s 00:05:50.739 20:41:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.739 20:41:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:50.739 20:41:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.739 20:41:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:50.739 20:41:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:50.739 20:41:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.739 20:41:54 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.739 20:41:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.739 20:41:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.739 20:41:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.739 20:41:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.739 { 00:05:50.739 "nbd_device": "/dev/nbd0", 00:05:50.739 "bdev_name": "Malloc0" 00:05:50.739 }, 00:05:50.739 { 00:05:50.739 "nbd_device": "/dev/nbd1", 00:05:50.739 "bdev_name": "Malloc1" 00:05:50.739 } 00:05:50.739 ]' 00:05:50.739 20:41:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.739 { 00:05:50.739 "nbd_device": "/dev/nbd0", 00:05:50.739 "bdev_name": "Malloc0" 00:05:50.739 }, 00:05:50.739 { 00:05:50.739 "nbd_device": "/dev/nbd1", 00:05:50.739 "bdev_name": "Malloc1" 00:05:50.739 } 00:05:50.739 ]' 00:05:50.739 20:41:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.739 20:41:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.739 /dev/nbd1' 00:05:50.739 20:41:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.739 /dev/nbd1' 00:05:50.739 20:41:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.999 20:41:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.999 20:41:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.000 256+0 records in 00:05:51.000 256+0 records out 00:05:51.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124439 s, 84.3 MB/s 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.000 256+0 records in 00:05:51.000 256+0 records out 00:05:51.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158092 s, 66.3 MB/s 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.000 256+0 records in 00:05:51.000 256+0 records out 00:05:51.000 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.039351 s, 26.6 MB/s 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.000 20:41:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.259 20:41:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.259 20:41:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.259 20:41:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.259 20:41:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.259 20:41:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.259 20:41:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.259 20:41:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.259 20:41:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.259 20:41:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.259 20:41:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.259 20:41:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.259 20:41:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.259 20:41:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.259 20:41:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.259 20:41:55 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.259 20:41:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.259 20:41:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.259 20:41:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.259 20:41:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.259 20:41:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.259 20:41:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.519 20:41:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.519 20:41:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.519 20:41:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.519 20:41:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.519 20:41:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.519 20:41:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.519 20:41:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.519 20:41:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.519 20:41:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.519 20:41:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.519 20:41:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.519 20:41:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.519 20:41:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.780 20:41:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.780 [2024-07-15 20:41:55.577936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.780 [2024-07-15 20:41:55.642207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.780 [2024-07-15 20:41:55.642210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.041 [2024-07-15 20:41:55.673514] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.041 [2024-07-15 20:41:55.673545] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.584 20:41:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.584 20:41:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:54.584 spdk_app_start Round 1 00:05:54.584 20:41:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1372192 /var/tmp/spdk-nbd.sock 00:05:54.584 20:41:58 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1372192 ']' 00:05:54.584 20:41:58 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.584 20:41:58 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.584 20:41:58 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
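Each app_repeat round above follows the same data path: two 64 MB malloc bdevs are created over the app's RPC socket, exported as /dev/nbd0 and /dev/nbd1, written with 1 MiB of random data and compared back. Condensed to its essentials, with RPC_PY standing in for the rpc.py invocation shown in the trace:

  RPC_PY="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC_PY bdev_malloc_create 64 4096            # prints Malloc0
  $RPC_PY bdev_malloc_create 64 4096            # prints Malloc1
  $RPC_PY nbd_start_disk Malloc0 /dev/nbd0
  $RPC_PY nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256      # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M nbdrandtest "$nbd"                       # verify what was written
  done
  rm nbdrandtest
  $RPC_PY nbd_stop_disk /dev/nbd0
  $RPC_PY nbd_stop_disk /dev/nbd1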
00:05:54.584 20:41:58 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.584 20:41:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.844 20:41:58 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.844 20:41:58 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:54.844 20:41:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.105 Malloc0 00:05:55.105 20:41:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.105 Malloc1 00:05:55.105 20:41:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.105 20:41:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.105 20:41:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.105 20:41:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.105 20:41:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.105 20:41:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.105 20:41:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.105 20:41:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.105 20:41:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.105 20:41:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.105 20:41:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.105 20:41:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.105 20:41:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.105 20:41:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.105 20:41:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.105 20:41:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.365 /dev/nbd0 00:05:55.365 20:41:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.365 20:41:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.365 20:41:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:55.365 20:41:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:55.365 20:41:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.365 20:41:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.365 20:41:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:55.365 20:41:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:55.365 20:41:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.365 20:41:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.365 20:41:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:55.365 1+0 records in 00:05:55.365 1+0 records out 00:05:55.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002473 s, 16.6 MB/s 00:05:55.365 20:41:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.365 20:41:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:55.365 20:41:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.365 20:41:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.365 20:41:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:55.365 20:41:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.365 20:41:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.365 20:41:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.625 /dev/nbd1 00:05:55.625 20:41:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.625 20:41:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.625 20:41:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:55.625 20:41:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:55.625 20:41:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.625 20:41:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.625 20:41:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:55.625 20:41:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:55.625 20:41:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.625 20:41:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.625 20:41:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.625 1+0 records in 00:05:55.625 1+0 records out 00:05:55.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275453 s, 14.9 MB/s 00:05:55.625 20:41:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.625 20:41:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:55.625 20:41:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.625 20:41:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.625 20:41:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:55.625 20:41:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.625 20:41:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.626 20:41:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.626 20:41:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.626 20:41:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.626 20:41:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:55.626 { 00:05:55.626 "nbd_device": "/dev/nbd0", 00:05:55.626 "bdev_name": "Malloc0" 00:05:55.626 }, 00:05:55.626 { 00:05:55.626 "nbd_device": "/dev/nbd1", 00:05:55.626 "bdev_name": "Malloc1" 00:05:55.626 } 00:05:55.626 ]' 00:05:55.626 20:41:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.626 { 00:05:55.626 "nbd_device": "/dev/nbd0", 00:05:55.626 "bdev_name": "Malloc0" 00:05:55.626 }, 00:05:55.626 { 00:05:55.626 "nbd_device": "/dev/nbd1", 00:05:55.626 "bdev_name": "Malloc1" 00:05:55.626 } 00:05:55.626 ]' 00:05:55.626 20:41:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.626 20:41:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.626 /dev/nbd1' 00:05:55.626 20:41:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.626 /dev/nbd1' 00:05:55.626 20:41:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.886 256+0 records in 00:05:55.886 256+0 records out 00:05:55.886 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124212 s, 84.4 MB/s 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.886 256+0 records in 00:05:55.886 256+0 records out 00:05:55.886 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247837 s, 42.3 MB/s 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.886 256+0 records in 00:05:55.886 256+0 records out 00:05:55.886 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170417 s, 61.5 MB/s 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.886 20:41:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.887 20:41:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.887 20:41:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.887 20:41:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.887 20:41:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.887 20:41:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.887 20:41:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.887 20:41:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.887 20:41:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.887 20:41:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.147 20:41:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.407 20:42:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.407 20:42:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.407 20:42:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.407 20:42:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.407 20:42:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.407 20:42:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.407 20:42:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.407 20:42:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.407 20:42:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.407 20:42:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.407 20:42:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.407 20:42:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.407 20:42:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.667 20:42:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.667 [2024-07-15 20:42:00.468906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.667 [2024-07-15 20:42:00.533601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.667 [2024-07-15 20:42:00.533603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.926 [2024-07-15 20:42:00.565730] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.926 [2024-07-15 20:42:00.565762] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.468 20:42:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.468 20:42:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:59.468 spdk_app_start Round 2 00:05:59.468 20:42:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1372192 /var/tmp/spdk-nbd.sock 00:05:59.468 20:42:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1372192 ']' 00:05:59.468 20:42:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.468 20:42:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.468 20:42:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
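The waitfornbd checks repeated in every round do two things: poll /proc/partitions until the kernel has registered the nbd device, then prove it answers I/O by reading one 4 KiB block with O_DIRECT. A simplified version of that helper is sketched below; the retry count and dd flags are taken from the trace, while the sleep interval and scratch-file path are assumptions:

  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1    # assumed back-off; the real helper's delay may differ
      done
      # read a single 4 KiB block with O_DIRECT to confirm the device serves I/O
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }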
00:05:59.468 20:42:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.468 20:42:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.727 20:42:03 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.727 20:42:03 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:59.727 20:42:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.987 Malloc0 00:05:59.987 20:42:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.987 Malloc1 00:05:59.987 20:42:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.987 20:42:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.987 20:42:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.987 20:42:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.987 20:42:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.987 20:42:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.987 20:42:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.987 20:42:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.987 20:42:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.987 20:42:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.987 20:42:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.987 20:42:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.987 20:42:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.987 20:42:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.987 20:42:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.987 20:42:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.248 /dev/nbd0 00:06:00.248 20:42:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.248 20:42:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.248 20:42:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:00.248 20:42:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:00.248 20:42:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:00.248 20:42:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:00.248 20:42:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:00.248 20:42:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:00.248 20:42:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:00.248 20:42:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:00.248 20:42:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:00.248 1+0 records in 00:06:00.248 1+0 records out 00:06:00.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270539 s, 15.1 MB/s 00:06:00.248 20:42:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.248 20:42:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:00.248 20:42:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.248 20:42:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:00.248 20:42:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:00.248 20:42:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.248 20:42:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.248 20:42:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.510 /dev/nbd1 00:06:00.510 20:42:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.510 20:42:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.510 20:42:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:00.510 20:42:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:00.510 20:42:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:00.510 20:42:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:00.510 20:42:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:00.510 20:42:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:00.510 20:42:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:00.510 20:42:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:00.510 20:42:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.510 1+0 records in 00:06:00.510 1+0 records out 00:06:00.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207583 s, 19.7 MB/s 00:06:00.510 20:42:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.510 20:42:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:00.510 20:42:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.510 20:42:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:00.510 20:42:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:00.510 20:42:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.510 20:42:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.510 20:42:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.510 20:42:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.510 20:42:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.510 20:42:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:00.510 { 00:06:00.510 "nbd_device": "/dev/nbd0", 00:06:00.510 "bdev_name": "Malloc0" 00:06:00.510 }, 00:06:00.510 { 00:06:00.510 "nbd_device": "/dev/nbd1", 00:06:00.510 "bdev_name": "Malloc1" 00:06:00.510 } 00:06:00.510 ]' 00:06:00.510 20:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.510 { 00:06:00.510 "nbd_device": "/dev/nbd0", 00:06:00.510 "bdev_name": "Malloc0" 00:06:00.510 }, 00:06:00.510 { 00:06:00.510 "nbd_device": "/dev/nbd1", 00:06:00.510 "bdev_name": "Malloc1" 00:06:00.510 } 00:06:00.510 ]' 00:06:00.510 20:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.510 20:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.510 /dev/nbd1' 00:06:00.510 20:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.510 /dev/nbd1' 00:06:00.510 20:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.510 20:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.510 20:42:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.771 256+0 records in 00:06:00.771 256+0 records out 00:06:00.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012189 s, 86.0 MB/s 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.771 256+0 records in 00:06:00.771 256+0 records out 00:06:00.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015853 s, 66.1 MB/s 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.771 256+0 records in 00:06:00.771 256+0 records out 00:06:00.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198086 s, 52.9 MB/s 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.771 20:42:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.032 20:42:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.032 20:42:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.032 20:42:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.032 20:42:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.032 20:42:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.032 20:42:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.032 20:42:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.032 20:42:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.032 20:42:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.032 20:42:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.032 20:42:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.293 20:42:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.293 20:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.293 20:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.293 20:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.293 20:42:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.293 20:42:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.293 20:42:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.293 20:42:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.293 20:42:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.293 20:42:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.293 20:42:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.293 20:42:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.293 20:42:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.293 20:42:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.554 [2024-07-15 20:42:05.303210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.554 [2024-07-15 20:42:05.367570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.554 [2024-07-15 20:42:05.367573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.554 [2024-07-15 20:42:05.399002] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.554 [2024-07-15 20:42:05.399035] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.863 20:42:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1372192 /var/tmp/spdk-nbd.sock 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1372192 ']' 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
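Between start and stop, the test counts exported devices by parsing the nbd_get_disks JSON, which is why jq and grep -c /dev/nbd appear in the trace above. A small sketch of that check, assuming the same socket and an expected count of two:

  disks_json=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  # e.g. [{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"}, ...]
  names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c exits 1 when nothing matches
  [ "$count" -eq 2 ] || echo "unexpected nbd count: $count"

After nbd_stop_disk the same parse yields an empty list and a count of zero, which is the state visible just before each spdk_kill_instance call.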
00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:04.863 20:42:08 event.app_repeat -- event/event.sh@39 -- # killprocess 1372192 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1372192 ']' 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1372192 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1372192 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1372192' 00:06:04.863 killing process with pid 1372192 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1372192 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1372192 00:06:04.863 spdk_app_start is called in Round 0. 00:06:04.863 Shutdown signal received, stop current app iteration 00:06:04.863 Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 reinitialization... 00:06:04.863 spdk_app_start is called in Round 1. 00:06:04.863 Shutdown signal received, stop current app iteration 00:06:04.863 Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 reinitialization... 00:06:04.863 spdk_app_start is called in Round 2. 00:06:04.863 Shutdown signal received, stop current app iteration 00:06:04.863 Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 reinitialization... 00:06:04.863 spdk_app_start is called in Round 3. 
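killprocess, used here to tear down app_repeat just as it earlier tore down the scheduler app, is essentially a guarded kill-and-wait. Reconstructed from the calls visible in the trace (simplified; the real helper also special-cases processes wrapped in sudo and non-Linux hosts):

  killprocess() {
      local pid=$1 name
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                    # is the process still alive?
      if [ "$(uname)" = Linux ]; then
          name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
          [ "$name" != sudo ] || return 1           # never signal a bare sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }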
00:06:04.863 Shutdown signal received, stop current app iteration 00:06:04.863 20:42:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:04.863 20:42:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:04.863 00:06:04.863 real 0m15.589s 00:06:04.863 user 0m33.538s 00:06:04.863 sys 0m2.137s 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.863 20:42:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.863 ************************************ 00:06:04.863 END TEST app_repeat 00:06:04.863 ************************************ 00:06:04.863 20:42:08 event -- common/autotest_common.sh@1142 -- # return 0 00:06:04.863 20:42:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:04.863 20:42:08 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:04.863 20:42:08 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.863 20:42:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.863 20:42:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.863 ************************************ 00:06:04.863 START TEST cpu_locks 00:06:04.863 ************************************ 00:06:04.863 20:42:08 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:04.863 * Looking for test storage... 00:06:04.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:04.863 20:42:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:04.863 20:42:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:04.864 20:42:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:04.864 20:42:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:04.864 20:42:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.864 20:42:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.864 20:42:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.864 ************************************ 00:06:04.864 START TEST default_locks 00:06:04.864 ************************************ 00:06:04.864 20:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:04.864 20:42:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1375584 00:06:04.864 20:42:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1375584 00:06:04.864 20:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1375584 ']' 00:06:04.864 20:42:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.864 20:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.864 20:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.864 20:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:04.864 20:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.864 20:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.166 [2024-07-15 20:42:08.775939] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:05.166 [2024-07-15 20:42:08.776005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375584 ] 00:06:05.166 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.166 [2024-07-15 20:42:08.839284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.166 [2024-07-15 20:42:08.915713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.735 20:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.735 20:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:05.735 20:42:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1375584 00:06:05.735 20:42:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1375584 00:06:05.735 20:42:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.304 lslocks: write error 00:06:06.304 20:42:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1375584 00:06:06.304 20:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1375584 ']' 00:06:06.304 20:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1375584 00:06:06.304 20:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:06.304 20:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.304 20:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1375584 00:06:06.304 20:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.304 20:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.304 20:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1375584' 00:06:06.304 killing process with pid 1375584 00:06:06.304 20:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1375584 00:06:06.304 20:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1375584 00:06:06.304 20:42:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1375584 00:06:06.304 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:06.304 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1375584 00:06:06.304 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:06.304 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.304 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:06.304 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.304 20:42:10 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1375584 00:06:06.304 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1375584 ']' 00:06:06.304 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.304 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.304 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.304 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.304 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1375584) - No such process 00:06:06.304 ERROR: process (pid: 1375584) is no longer running 00:06:06.304 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.565 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:06.565 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:06.565 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:06.565 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:06.565 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:06.565 20:42:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:06.565 20:42:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.565 20:42:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.565 20:42:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.565 00:06:06.565 real 0m1.490s 00:06:06.565 user 0m1.554s 00:06:06.565 sys 0m0.521s 00:06:06.565 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.565 20:42:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.565 ************************************ 00:06:06.565 END TEST default_locks 00:06:06.565 ************************************ 00:06:06.565 20:42:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:06.565 20:42:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:06.565 20:42:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.565 20:42:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.565 20:42:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.565 ************************************ 00:06:06.565 START TEST default_locks_via_rpc 00:06:06.565 ************************************ 00:06:06.565 20:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:06.565 20:42:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1376256 00:06:06.565 20:42:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1376256 00:06:06.565 20:42:10 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.565 20:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1376256 ']' 00:06:06.565 20:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.565 20:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.565 20:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.565 20:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.565 20:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.565 [2024-07-15 20:42:10.326277] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:06.565 [2024-07-15 20:42:10.326331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1376256 ] 00:06:06.565 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.565 [2024-07-15 20:42:10.387458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.825 [2024-07-15 20:42:10.457841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1376256 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.397 20:42:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1376256 
00:06:07.967 20:42:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1376256 00:06:07.967 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1376256 ']' 00:06:07.967 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1376256 00:06:07.967 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:07.967 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.967 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1376256 00:06:07.967 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.967 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.967 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1376256' 00:06:07.967 killing process with pid 1376256 00:06:07.967 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1376256 00:06:07.967 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1376256 00:06:08.228 00:06:08.229 real 0m1.606s 00:06:08.229 user 0m1.706s 00:06:08.229 sys 0m0.505s 00:06:08.229 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.229 20:42:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.229 ************************************ 00:06:08.229 END TEST default_locks_via_rpc 00:06:08.229 ************************************ 00:06:08.229 20:42:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:08.229 20:42:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:08.229 20:42:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.229 20:42:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.229 20:42:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.229 ************************************ 00:06:08.229 START TEST non_locking_app_on_locked_coremask 00:06:08.229 ************************************ 00:06:08.229 20:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:08.229 20:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1376750 00:06:08.229 20:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1376750 /var/tmp/spdk.sock 00:06:08.229 20:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.229 20:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1376750 ']' 00:06:08.229 20:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.229 20:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.229 20:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.229 20:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.229 20:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.229 [2024-07-15 20:42:12.017321] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:08.229 [2024-07-15 20:42:12.017390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1376750 ] 00:06:08.229 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.229 [2024-07-15 20:42:12.078797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.489 [2024-07-15 20:42:12.149457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.059 20:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.059 20:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:09.059 20:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:09.059 20:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1377081 00:06:09.059 20:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1377081 /var/tmp/spdk2.sock 00:06:09.059 20:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1377081 ']' 00:06:09.059 20:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.059 20:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.059 20:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.059 20:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.059 20:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.059 [2024-07-15 20:42:12.803120] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:09.059 [2024-07-15 20:42:12.803173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377081 ] 00:06:09.059 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.059 [2024-07-15 20:42:12.890747] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:09.059 [2024-07-15 20:42:12.890774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.319 [2024-07-15 20:42:13.024779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.890 20:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.890 20:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:09.890 20:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1376750 00:06:09.890 20:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1376750 00:06:09.890 20:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.460 lslocks: write error 00:06:10.460 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1376750 00:06:10.460 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1376750 ']' 00:06:10.460 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1376750 00:06:10.460 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:10.460 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.460 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1376750 00:06:10.460 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.460 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.460 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1376750' 00:06:10.460 killing process with pid 1376750 00:06:10.460 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1376750 00:06:10.460 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1376750 00:06:10.720 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1377081 00:06:10.720 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1377081 ']' 00:06:10.720 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1377081 00:06:10.720 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:10.720 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.720 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1377081 00:06:10.980 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.980 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.980 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1377081' 00:06:10.980 
killing process with pid 1377081 00:06:10.980 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1377081 00:06:10.980 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1377081 00:06:10.980 00:06:10.980 real 0m2.884s 00:06:10.980 user 0m3.117s 00:06:10.980 sys 0m0.841s 00:06:10.980 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.980 20:42:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.980 ************************************ 00:06:10.980 END TEST non_locking_app_on_locked_coremask 00:06:10.980 ************************************ 00:06:10.980 20:42:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:11.240 20:42:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:11.240 20:42:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.240 20:42:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.240 20:42:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.240 ************************************ 00:06:11.240 START TEST locking_app_on_unlocked_coremask 00:06:11.240 ************************************ 00:06:11.240 20:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:11.240 20:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1377456 00:06:11.240 20:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1377456 /var/tmp/spdk.sock 00:06:11.240 20:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:11.240 20:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1377456 ']' 00:06:11.240 20:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.240 20:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.240 20:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.240 20:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.240 20:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.240 [2024-07-15 20:42:14.963393] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:06:11.240 [2024-07-15 20:42:14.963439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377456 ] 00:06:11.240 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.240 [2024-07-15 20:42:15.021953] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:11.240 [2024-07-15 20:42:15.021985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.240 [2024-07-15 20:42:15.085366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.180 20:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.180 20:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:12.180 20:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1377647 00:06:12.180 20:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1377647 /var/tmp/spdk2.sock 00:06:12.180 20:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1377647 ']' 00:06:12.180 20:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.180 20:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.180 20:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.180 20:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.180 20:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.180 20:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.180 [2024-07-15 20:42:15.785685] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:06:12.180 [2024-07-15 20:42:15.785739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377647 ] 00:06:12.180 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.180 [2024-07-15 20:42:15.872314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.180 [2024-07-15 20:42:16.005635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.748 20:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.748 20:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:12.748 20:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1377647 00:06:12.748 20:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1377647 00:06:12.748 20:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.318 lslocks: write error 00:06:13.318 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1377456 00:06:13.318 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1377456 ']' 00:06:13.318 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1377456 00:06:13.318 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:13.318 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.318 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1377456 00:06:13.578 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.578 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.579 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1377456' 00:06:13.579 killing process with pid 1377456 00:06:13.579 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1377456 00:06:13.579 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1377456 00:06:13.838 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1377647 00:06:13.838 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1377647 ']' 00:06:13.838 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1377647 00:06:13.838 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:13.838 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.838 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1377647 00:06:13.838 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:13.838 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.838 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1377647' 00:06:13.838 killing process with pid 1377647 00:06:13.838 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1377647 00:06:13.838 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1377647 00:06:14.098 00:06:14.098 real 0m2.999s 00:06:14.098 user 0m3.288s 00:06:14.098 sys 0m0.875s 00:06:14.098 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.098 20:42:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.098 ************************************ 00:06:14.098 END TEST locking_app_on_unlocked_coremask 00:06:14.098 ************************************ 00:06:14.098 20:42:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:14.098 20:42:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:14.098 20:42:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.098 20:42:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.098 20:42:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.098 ************************************ 00:06:14.098 START TEST locking_app_on_locked_coremask 00:06:14.098 ************************************ 00:06:14.098 20:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:14.098 20:42:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1378162 00:06:14.098 20:42:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1378162 /var/tmp/spdk.sock 00:06:14.098 20:42:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.098 20:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1378162 ']' 00:06:14.098 20:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.098 20:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.098 20:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.098 20:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.098 20:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.359 [2024-07-15 20:42:18.031083] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:06:14.359 [2024-07-15 20:42:18.031136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378162 ] 00:06:14.359 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.359 [2024-07-15 20:42:18.091402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.359 [2024-07-15 20:42:18.157999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1378197 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1378197 /var/tmp/spdk2.sock 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1378197 /var/tmp/spdk2.sock 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1378197 /var/tmp/spdk2.sock 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1378197 ']' 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.929 20:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.189 [2024-07-15 20:42:18.853103] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:06:15.189 [2024-07-15 20:42:18.853168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378197 ] 00:06:15.189 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.189 [2024-07-15 20:42:18.942425] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1378162 has claimed it. 00:06:15.189 [2024-07-15 20:42:18.942464] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1378197) - No such process 00:06:15.760 ERROR: process (pid: 1378197) is no longer running 00:06:15.760 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.760 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:15.760 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:15.760 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:15.760 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:15.760 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:15.760 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1378162 00:06:15.760 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1378162 00:06:15.760 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.020 lslocks: write error 00:06:16.020 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1378162 00:06:16.020 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1378162 ']' 00:06:16.020 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1378162 00:06:16.020 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:16.020 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.020 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1378162 00:06:16.280 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.280 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.280 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1378162' 00:06:16.280 killing process with pid 1378162 00:06:16.280 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1378162 00:06:16.280 20:42:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1378162 00:06:16.280 00:06:16.280 real 0m2.195s 00:06:16.280 user 0m2.440s 00:06:16.280 sys 0m0.599s 00:06:16.280 20:42:20 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.280 20:42:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.280 ************************************ 00:06:16.280 END TEST locking_app_on_locked_coremask 00:06:16.280 ************************************ 00:06:16.541 20:42:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:16.541 20:42:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:16.541 20:42:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.541 20:42:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.541 20:42:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.541 ************************************ 00:06:16.541 START TEST locking_overlapped_coremask 00:06:16.541 ************************************ 00:06:16.541 20:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:16.541 20:42:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1378541 00:06:16.541 20:42:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1378541 /var/tmp/spdk.sock 00:06:16.541 20:42:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:16.541 20:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1378541 ']' 00:06:16.541 20:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.541 20:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.541 20:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.541 20:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.541 20:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.541 [2024-07-15 20:42:20.309095] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:06:16.541 [2024-07-15 20:42:20.309148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378541 ] 00:06:16.541 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.541 [2024-07-15 20:42:20.368897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.801 [2024-07-15 20:42:20.434619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.801 [2024-07-15 20:42:20.434736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.801 [2024-07-15 20:42:20.434739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1378871 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1378871 /var/tmp/spdk2.sock 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1378871 /var/tmp/spdk2.sock 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1378871 /var/tmp/spdk2.sock 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1378871 ']' 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.371 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.371 [2024-07-15 20:42:21.124703] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:06:17.371 [2024-07-15 20:42:21.124760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378871 ] 00:06:17.371 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.371 [2024-07-15 20:42:21.195884] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1378541 has claimed it. 00:06:17.371 [2024-07-15 20:42:21.195918] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1378871) - No such process 00:06:17.941 ERROR: process (pid: 1378871) is no longer running 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1378541 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1378541 ']' 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1378541 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1378541 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1378541' 00:06:17.941 killing process with pid 1378541 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1378541 00:06:17.941 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1378541 00:06:18.201 00:06:18.201 real 0m1.755s 00:06:18.201 user 0m4.966s 00:06:18.201 sys 0m0.349s 00:06:18.201 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.201 20:42:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.201 ************************************ 00:06:18.201 END TEST locking_overlapped_coremask 00:06:18.201 ************************************ 00:06:18.201 20:42:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:18.201 20:42:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:18.201 20:42:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.201 20:42:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.201 20:42:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.201 ************************************ 00:06:18.201 START TEST locking_overlapped_coremask_via_rpc 00:06:18.201 ************************************ 00:06:18.201 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:18.201 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1378916 00:06:18.201 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1378916 /var/tmp/spdk.sock 00:06:18.201 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:18.201 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1378916 ']' 00:06:18.201 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.201 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.201 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.201 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.201 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.461 [2024-07-15 20:42:22.125725] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:18.461 [2024-07-15 20:42:22.125769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378916 ] 00:06:18.461 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.461 [2024-07-15 20:42:22.185461] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:18.461 [2024-07-15 20:42:22.185493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.461 [2024-07-15 20:42:22.251660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.461 [2024-07-15 20:42:22.251778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.461 [2024-07-15 20:42:22.251780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.721 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.721 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:18.721 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1379089 00:06:18.721 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1379089 /var/tmp/spdk2.sock 00:06:18.721 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1379089 ']' 00:06:18.721 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:18.721 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.721 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.721 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.721 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.721 20:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.721 [2024-07-15 20:42:22.483721] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:18.721 [2024-07-15 20:42:22.483777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379089 ] 00:06:18.721 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.721 [2024-07-15 20:42:22.554844] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:18.721 [2024-07-15 20:42:22.554870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.981 [2024-07-15 20:42:22.664883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.981 [2024-07-15 20:42:22.665040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.981 [2024-07-15 20:42:22.665042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.552 [2024-07-15 20:42:23.261183] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1378916 has claimed it. 
00:06:19.552 request: 00:06:19.552 { 00:06:19.552 "method": "framework_enable_cpumask_locks", 00:06:19.552 "req_id": 1 00:06:19.552 } 00:06:19.552 Got JSON-RPC error response 00:06:19.552 response: 00:06:19.552 { 00:06:19.552 "code": -32603, 00:06:19.552 "message": "Failed to claim CPU core: 2" 00:06:19.552 } 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1378916 /var/tmp/spdk.sock 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1378916 ']' 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.552 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.553 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.553 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.812 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.812 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:19.812 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1379089 /var/tmp/spdk2.sock 00:06:19.812 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1379089 ']' 00:06:19.812 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.812 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.812 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
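What the via_rpc variant is exercising: both targets start with --disable-cpumask-locks, the first one (-m 0x7) then takes the per-core locks through the framework_enable_cpumask_locks RPC, and the same RPC against the second target (-m 0x1c) has to fail because core 2 sits in both masks. Issued by hand, the two calls would look roughly like this (the scripts/rpc.py form is an assumption; the method name, sockets and error are taken from the request/response dump above):

  scripts/rpc.py framework_enable_cpumask_locks                         # first target, /var/tmp/spdk.sock: claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: fails with -32603 'Failed to claim CPU core: 2'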
00:06:19.812 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.812 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.812 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.812 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:19.813 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:19.813 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.813 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.813 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.813 00:06:19.813 real 0m1.539s 00:06:19.813 user 0m0.697s 00:06:19.813 sys 0m0.137s 00:06:19.813 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.813 20:42:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.813 ************************************ 00:06:19.813 END TEST locking_overlapped_coremask_via_rpc 00:06:19.813 ************************************ 00:06:19.813 20:42:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:19.813 20:42:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:19.813 20:42:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1378916 ]] 00:06:19.813 20:42:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1378916 00:06:19.813 20:42:23 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1378916 ']' 00:06:19.813 20:42:23 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1378916 00:06:19.813 20:42:23 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:19.813 20:42:23 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.813 20:42:23 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1378916 00:06:19.813 20:42:23 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.813 20:42:23 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.813 20:42:23 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1378916' 00:06:19.813 killing process with pid 1378916 00:06:19.813 20:42:23 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1378916 00:06:19.813 20:42:23 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1378916 00:06:20.072 20:42:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1379089 ]] 00:06:20.072 20:42:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1379089 00:06:20.072 20:42:23 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1379089 ']' 00:06:20.072 20:42:23 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1379089 00:06:20.072 20:42:23 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:20.072 20:42:23 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.072 20:42:23 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1379089 00:06:20.332 20:42:23 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:20.332 20:42:23 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:20.332 20:42:23 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1379089' 00:06:20.332 killing process with pid 1379089 00:06:20.332 20:42:23 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1379089 00:06:20.332 20:42:23 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1379089 00:06:20.332 20:42:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:20.332 20:42:24 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:20.332 20:42:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1378916 ]] 00:06:20.332 20:42:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1378916 00:06:20.332 20:42:24 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1378916 ']' 00:06:20.332 20:42:24 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1378916 00:06:20.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1378916) - No such process 00:06:20.332 20:42:24 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1378916 is not found' 00:06:20.332 Process with pid 1378916 is not found 00:06:20.332 20:42:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1379089 ]] 00:06:20.332 20:42:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1379089 00:06:20.332 20:42:24 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1379089 ']' 00:06:20.332 20:42:24 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1379089 00:06:20.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1379089) - No such process 00:06:20.332 20:42:24 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1379089 is not found' 00:06:20.332 Process with pid 1379089 is not found 00:06:20.332 20:42:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:20.332 00:06:20.332 real 0m15.606s 00:06:20.332 user 0m25.988s 00:06:20.332 sys 0m4.673s 00:06:20.332 20:42:24 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.332 20:42:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.332 ************************************ 00:06:20.332 END TEST cpu_locks 00:06:20.332 ************************************ 00:06:20.332 20:42:24 event -- common/autotest_common.sh@1142 -- # return 0 00:06:20.332 00:06:20.332 real 0m41.084s 00:06:20.332 user 1m18.901s 00:06:20.332 sys 0m7.753s 00:06:20.332 20:42:24 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.332 20:42:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.332 ************************************ 00:06:20.332 END TEST event 00:06:20.332 ************************************ 00:06:20.592 20:42:24 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.592 20:42:24 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:20.592 20:42:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.592 20:42:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.592 
20:42:24 -- common/autotest_common.sh@10 -- # set +x 00:06:20.592 ************************************ 00:06:20.592 START TEST thread 00:06:20.592 ************************************ 00:06:20.592 20:42:24 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:20.592 * Looking for test storage... 00:06:20.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:20.592 20:42:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:20.592 20:42:24 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:20.592 20:42:24 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.592 20:42:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.592 ************************************ 00:06:20.592 START TEST thread_poller_perf 00:06:20.592 ************************************ 00:06:20.592 20:42:24 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:20.592 [2024-07-15 20:42:24.449331] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:20.592 [2024-07-15 20:42:24.449446] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379674 ] 00:06:20.592 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.852 [2024-07-15 20:42:24.516263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.852 [2024-07-15 20:42:24.589946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.852 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:21.843 ====================================== 00:06:21.843 busy:2412403526 (cyc) 00:06:21.843 total_run_count: 287000 00:06:21.843 tsc_hz: 2400000000 (cyc) 00:06:21.843 ====================================== 00:06:21.843 poller_cost: 8405 (cyc), 3502 (nsec) 00:06:21.843 00:06:21.843 real 0m1.226s 00:06:21.843 user 0m1.141s 00:06:21.843 sys 0m0.080s 00:06:21.843 20:42:25 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.843 20:42:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.843 ************************************ 00:06:21.843 END TEST thread_poller_perf 00:06:21.843 ************************************ 00:06:21.843 20:42:25 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:21.843 20:42:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:21.843 20:42:25 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:21.843 20:42:25 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.843 20:42:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.119 ************************************ 00:06:22.119 START TEST thread_poller_perf 00:06:22.119 ************************************ 00:06:22.119 20:42:25 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:22.119 [2024-07-15 20:42:25.750971] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:22.119 [2024-07-15 20:42:25.751076] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379825 ] 00:06:22.119 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.119 [2024-07-15 20:42:25.814949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.119 [2024-07-15 20:42:25.882548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.119 Running 1000 pollers for 1 seconds with 0 microseconds period. 
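The poller_perf summary lines can be read back from first principles: poller_cost in cycles is simply busy divided by total_run_count, and the nanosecond figure is that quotient rescaled by tsc_hz. A quick check against the 1-microsecond-period run above (a sketch, not something the test itself prints):

  echo '2412403526 / 287000' | bc                # ~8405 cycles per poller invocation
  echo '8405 * 1000000000 / 2400000000' | bc     # ~3502 ns at the reported 2.4 GHz TSC

The 0-microsecond-period run that follows reads the same way, just with a much higher run count and a correspondingly lower per-call cost.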
00:06:23.056 ====================================== 00:06:23.056 busy:2402265042 (cyc) 00:06:23.056 total_run_count: 3809000 00:06:23.056 tsc_hz: 2400000000 (cyc) 00:06:23.056 ====================================== 00:06:23.056 poller_cost: 630 (cyc), 262 (nsec) 00:06:23.056 00:06:23.056 real 0m1.208s 00:06:23.056 user 0m1.142s 00:06:23.056 sys 0m0.062s 00:06:23.056 20:42:26 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.056 20:42:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.056 ************************************ 00:06:23.056 END TEST thread_poller_perf 00:06:23.056 ************************************ 00:06:23.317 20:42:26 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:23.317 20:42:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:23.317 00:06:23.317 real 0m2.692s 00:06:23.317 user 0m2.377s 00:06:23.317 sys 0m0.324s 00:06:23.317 20:42:26 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.317 20:42:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.317 ************************************ 00:06:23.317 END TEST thread 00:06:23.317 ************************************ 00:06:23.317 20:42:27 -- common/autotest_common.sh@1142 -- # return 0 00:06:23.317 20:42:27 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:23.317 20:42:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.317 20:42:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.317 20:42:27 -- common/autotest_common.sh@10 -- # set +x 00:06:23.317 ************************************ 00:06:23.317 START TEST accel 00:06:23.317 ************************************ 00:06:23.317 20:42:27 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:23.317 * Looking for test storage... 00:06:23.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:23.317 20:42:27 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:23.317 20:42:27 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:23.317 20:42:27 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:23.317 20:42:27 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1380108 00:06:23.317 20:42:27 accel -- accel/accel.sh@63 -- # waitforlisten 1380108 00:06:23.317 20:42:27 accel -- common/autotest_common.sh@829 -- # '[' -z 1380108 ']' 00:06:23.317 20:42:27 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.317 20:42:27 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.317 20:42:27 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:23.317 20:42:27 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:23.317 20:42:27 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.317 20:42:27 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:23.317 20:42:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.317 20:42:27 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.317 20:42:27 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.317 20:42:27 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.317 20:42:27 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.317 20:42:27 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.317 20:42:27 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:23.317 20:42:27 accel -- accel/accel.sh@41 -- # jq -r . 00:06:23.317 [2024-07-15 20:42:27.180334] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:23.317 [2024-07-15 20:42:27.180394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380108 ] 00:06:23.317 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.577 [2024-07-15 20:42:27.246652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.577 [2024-07-15 20:42:27.320618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.147 20:42:27 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.147 20:42:27 accel -- common/autotest_common.sh@862 -- # return 0 00:06:24.147 20:42:27 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:24.147 20:42:27 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:24.147 20:42:27 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:24.147 20:42:27 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:24.147 20:42:27 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:24.147 20:42:27 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:24.147 20:42:27 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:24.147 20:42:27 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.147 20:42:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.147 20:42:27 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.147 20:42:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:24.147 20:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:24.147 20:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:24.147 20:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:24.147 20:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:24.147 20:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:24.147 20:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:24.147 20:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:24.147 20:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:24.147 20:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:24.147 20:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:24.147 20:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:24.147 20:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:24.147 20:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:24.147 20:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:24.147 20:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:24.147 20:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:24.147 20:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:24.147 20:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:24.147 20:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:24.147 20:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:24.147 
20:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:24.147 20:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:24.147 20:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:24.147 20:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:24.147 20:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:24.147 20:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:24.147 20:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:24.147 20:42:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # IFS== 00:06:24.147 20:42:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:24.147 20:42:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:24.147 20:42:28 accel -- accel/accel.sh@75 -- # killprocess 1380108 00:06:24.147 20:42:28 accel -- common/autotest_common.sh@948 -- # '[' -z 1380108 ']' 00:06:24.147 20:42:28 accel -- common/autotest_common.sh@952 -- # kill -0 1380108 00:06:24.147 20:42:28 accel -- common/autotest_common.sh@953 -- # uname 00:06:24.147 20:42:28 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.147 20:42:28 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1380108 00:06:24.407 20:42:28 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.407 20:42:28 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.407 20:42:28 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1380108' 00:06:24.407 killing process with pid 1380108 00:06:24.407 20:42:28 accel -- common/autotest_common.sh@967 -- # kill 1380108 00:06:24.407 20:42:28 accel -- common/autotest_common.sh@972 -- # wait 1380108 00:06:24.407 20:42:28 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:24.407 20:42:28 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:24.407 20:42:28 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:24.407 20:42:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.407 20:42:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.667 20:42:28 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:24.667 20:42:28 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:24.667 20:42:28 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:24.667 20:42:28 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.667 20:42:28 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.667 20:42:28 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.667 20:42:28 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.667 20:42:28 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.667 20:42:28 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:24.667 20:42:28 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
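The get_expected_opcs trace a little further up is only flattening the JSON map returned by the accel_get_opc_assignments RPC into key=value lines for the IFS== read loop. Run by hand it would look roughly like this (the scripts/rpc.py form and the exact opcode names are assumptions; in this run every opcode resolves to the software module, as the repeated expected_opcs[...]=software lines show):

  scripts/rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # -> copy=software
  #    fill=software
  #    crc32c=software
  #    ... one line per opcode, each split on '=' into opc and module by the read loop above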
00:06:24.667 20:42:28 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.667 20:42:28 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:24.667 20:42:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.667 20:42:28 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:24.667 20:42:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:24.667 20:42:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.667 20:42:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.667 ************************************ 00:06:24.667 START TEST accel_missing_filename 00:06:24.667 ************************************ 00:06:24.667 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:24.667 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:24.667 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:24.667 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:24.667 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.667 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:24.667 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.667 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:24.667 20:42:28 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:24.667 20:42:28 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:24.667 20:42:28 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.667 20:42:28 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.667 20:42:28 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.667 20:42:28 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.667 20:42:28 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.667 20:42:28 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:24.667 20:42:28 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:24.667 [2024-07-15 20:42:28.442969] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:24.667 [2024-07-15 20:42:28.443063] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380474 ] 00:06:24.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.667 [2024-07-15 20:42:28.504661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.927 [2024-07-15 20:42:28.570140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.927 [2024-07-15 20:42:28.601906] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.927 [2024-07-15 20:42:28.638597] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:24.927 A filename is required. 
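Both compress failures in this block come from the same option rules: per the accel_perf usage text, the compress workload needs -l <uncompressed input file>, and the verify switch -y is rejected for it, which is exactly what the missing-filename case above and the compress_verify case below trip over. A run meant to succeed would keep the input file and drop -y, roughly:

  build/examples/accel_perf -t 1 -w compress -l test/accel/bib

(paths relative to the spdk tree as in the traces; whether this particular build has a compress-capable module loaded is an assumption).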
00:06:24.927 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:24.927 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.927 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:24.927 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:24.927 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:24.927 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.927 00:06:24.927 real 0m0.276s 00:06:24.927 user 0m0.215s 00:06:24.927 sys 0m0.103s 00:06:24.927 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.927 20:42:28 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:24.927 ************************************ 00:06:24.927 END TEST accel_missing_filename 00:06:24.927 ************************************ 00:06:24.927 20:42:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.927 20:42:28 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.927 20:42:28 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:24.927 20:42:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.927 20:42:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.927 ************************************ 00:06:24.927 START TEST accel_compress_verify 00:06:24.927 ************************************ 00:06:24.927 20:42:28 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.927 20:42:28 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:24.927 20:42:28 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.927 20:42:28 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:24.927 20:42:28 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.927 20:42:28 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:24.927 20:42:28 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.927 20:42:28 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.927 20:42:28 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.927 20:42:28 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:24.927 20:42:28 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.927 20:42:28 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.927 20:42:28 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.927 20:42:28 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.927 20:42:28 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.927 20:42:28 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:24.927 20:42:28 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:24.927 [2024-07-15 20:42:28.789226] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:24.928 [2024-07-15 20:42:28.789322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380497 ] 00:06:24.928 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.187 [2024-07-15 20:42:28.853974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.187 [2024-07-15 20:42:28.922777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.187 [2024-07-15 20:42:28.954687] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.187 [2024-07-15 20:42:28.991761] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:25.187 00:06:25.187 Compression does not support the verify option, aborting. 00:06:25.187 20:42:29 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:25.187 20:42:29 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:25.188 20:42:29 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:25.188 20:42:29 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:25.188 20:42:29 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:25.188 20:42:29 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:25.188 00:06:25.188 real 0m0.284s 00:06:25.188 user 0m0.219s 00:06:25.188 sys 0m0.104s 00:06:25.188 20:42:29 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.188 20:42:29 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:25.188 ************************************ 00:06:25.188 END TEST accel_compress_verify 00:06:25.188 ************************************ 00:06:25.188 20:42:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.188 20:42:29 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:25.188 20:42:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:25.188 20:42:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.188 20:42:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.449 ************************************ 00:06:25.449 START TEST accel_wrong_workload 00:06:25.449 ************************************ 00:06:25.449 20:42:29 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:25.449 20:42:29 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:25.449 20:42:29 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:25.449 20:42:29 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:25.449 20:42:29 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.449 20:42:29 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:25.449 20:42:29 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.449 20:42:29 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:25.449 20:42:29 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:25.449 20:42:29 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:25.449 20:42:29 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.449 20:42:29 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.449 20:42:29 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.449 20:42:29 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.449 20:42:29 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.449 20:42:29 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:25.449 20:42:29 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:25.449 Unsupported workload type: foobar 00:06:25.449 [2024-07-15 20:42:29.141873] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:25.449 accel_perf options: 00:06:25.449 [-h help message] 00:06:25.449 [-q queue depth per core] 00:06:25.449 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:25.449 [-T number of threads per core 00:06:25.449 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:25.449 [-t time in seconds] 00:06:25.449 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:25.449 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:25.449 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:25.449 [-l for compress/decompress workloads, name of uncompressed input file 00:06:25.449 [-S for crc32c workload, use this seed value (default 0) 00:06:25.449 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:25.449 [-f for fill workload, use this BYTE value (default 255) 00:06:25.449 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:25.449 [-y verify result if this switch is on] 00:06:25.449 [-a tasks to allocate per core (default: same value as -q)] 00:06:25.449 Can be used to spread operations across a wider range of memory. 
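The usage dump above doubles as the reference for the two bad-option cases here (-w foobar just now, -x -1 in the negative_buffers test below) and for the crc32c runs that follow. Forms the option parser would accept, built only from switches shown in this log, look roughly like:

  build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # the invocation the accel_crc32c test below actually uses
  build/examples/accel_perf -t 1 -w xor -y -x 2       # xor needs at least two source buffers per the usage text

(the xor line is an illustrative sketch only; the crc32c form is the one exercised below).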
00:06:25.449 20:42:29 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:25.449 20:42:29 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:25.449 20:42:29 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:25.449 20:42:29 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:25.449 00:06:25.449 real 0m0.033s 00:06:25.449 user 0m0.020s 00:06:25.449 sys 0m0.013s 00:06:25.449 20:42:29 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.449 20:42:29 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:25.449 ************************************ 00:06:25.449 END TEST accel_wrong_workload 00:06:25.449 ************************************ 00:06:25.449 Error: writing output failed: Broken pipe 00:06:25.449 20:42:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.449 20:42:29 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:25.449 20:42:29 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:25.449 20:42:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.449 20:42:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.449 ************************************ 00:06:25.449 START TEST accel_negative_buffers 00:06:25.449 ************************************ 00:06:25.449 20:42:29 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:25.449 20:42:29 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:25.449 20:42:29 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:25.449 20:42:29 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:25.449 20:42:29 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.449 20:42:29 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:25.449 20:42:29 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.449 20:42:29 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:25.449 20:42:29 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:25.449 20:42:29 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:25.449 20:42:29 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.449 20:42:29 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.449 20:42:29 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.449 20:42:29 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.449 20:42:29 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.449 20:42:29 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:25.449 20:42:29 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:25.449 -x option must be non-negative. 
00:06:25.449 [2024-07-15 20:42:29.246069] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:25.449 accel_perf options: 00:06:25.449 [-h help message] 00:06:25.449 [-q queue depth per core] 00:06:25.449 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:25.449 [-T number of threads per core 00:06:25.449 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:25.449 [-t time in seconds] 00:06:25.449 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:25.449 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:25.449 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:25.449 [-l for compress/decompress workloads, name of uncompressed input file 00:06:25.449 [-S for crc32c workload, use this seed value (default 0) 00:06:25.449 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:25.449 [-f for fill workload, use this BYTE value (default 255) 00:06:25.449 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:25.449 [-y verify result if this switch is on] 00:06:25.449 [-a tasks to allocate per core (default: same value as -q)] 00:06:25.449 Can be used to spread operations across a wider range of memory. 00:06:25.449 20:42:29 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:25.449 20:42:29 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:25.449 20:42:29 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:25.449 20:42:29 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:25.449 00:06:25.449 real 0m0.034s 00:06:25.450 user 0m0.023s 00:06:25.450 sys 0m0.010s 00:06:25.450 20:42:29 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.450 20:42:29 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:25.450 ************************************ 00:06:25.450 END TEST accel_negative_buffers 00:06:25.450 ************************************ 00:06:25.450 Error: writing output failed: Broken pipe 00:06:25.450 20:42:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.450 20:42:29 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:25.450 20:42:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:25.450 20:42:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.450 20:42:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.450 ************************************ 00:06:25.450 START TEST accel_crc32c 00:06:25.450 ************************************ 00:06:25.450 20:42:29 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:25.450 20:42:29 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:25.450 20:42:29 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:25.450 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.450 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.450 20:42:29 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:25.450 20:42:29 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:25.450 20:42:29 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:25.450 20:42:29 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.450 20:42:29 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.450 20:42:29 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.450 20:42:29 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.450 20:42:29 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.450 20:42:29 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:25.450 20:42:29 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:25.710 [2024-07-15 20:42:29.353687] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:25.710 [2024-07-15 20:42:29.353787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380779 ] 00:06:25.710 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.710 [2024-07-15 20:42:29.419291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.710 [2024-07-15 20:42:29.494137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.710 20:42:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:27.094 20:42:30 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.094 00:06:27.094 real 0m1.300s 00:06:27.094 user 0m1.201s 00:06:27.094 sys 0m0.110s 00:06:27.094 20:42:30 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.094 20:42:30 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:27.094 ************************************ 00:06:27.094 END TEST accel_crc32c 00:06:27.094 ************************************ 00:06:27.094 20:42:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.094 20:42:30 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:27.094 20:42:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:27.094 20:42:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.094 20:42:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.094 ************************************ 00:06:27.094 START TEST accel_crc32c_C2 00:06:27.094 ************************************ 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:27.094 20:42:30 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:27.094 [2024-07-15 20:42:30.729021] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:27.094 [2024-07-15 20:42:30.729100] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380968 ] 00:06:27.094 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.094 [2024-07-15 20:42:30.789652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.094 [2024-07-15 20:42:30.855288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.094 20:42:30 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.094 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:27.095 20:42:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.476 00:06:28.476 real 0m1.283s 00:06:28.476 user 0m1.201s 00:06:28.476 sys 0m0.095s 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.476 20:42:31 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:28.476 ************************************ 00:06:28.476 END TEST accel_crc32c_C2 00:06:28.476 ************************************ 00:06:28.476 20:42:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.476 20:42:32 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:28.476 20:42:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:28.476 20:42:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.476 20:42:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.476 ************************************ 00:06:28.476 START TEST accel_copy 00:06:28.476 ************************************ 00:06:28.476 20:42:32 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:28.476 20:42:32 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:28.476 20:42:32 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
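accel_crc32c_C2 finishes just above (real 0m1.283s) and run_test accel_copy accel_test -t 1 -w copy -y starts the next case. run_test itself lives elsewhere in the tree and its body is not shown in this log; judging only by the START/END banners and the real/user/sys timing it leaves behind, a rough, hypothetical bash stand-in could look like this (illustration only, not SPDK's actual implementation):

#!/usr/bin/env bash
# Hypothetical stand-in for the run_test wrapper, reconstructed from the log output alone:
# print a START banner, time the wrapped command, then print an END banner.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return "$rc"
}

# Usage with a harmless placeholder command:
run_test_sketch demo_sleep sleep 1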
00:06:28.476 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.476 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.476 20:42:32 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:28.476 20:42:32 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:28.476 20:42:32 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:28.476 20:42:32 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.476 20:42:32 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.476 20:42:32 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.476 20:42:32 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.476 20:42:32 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.476 20:42:32 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:28.476 20:42:32 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:28.476 [2024-07-15 20:42:32.090049] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:28.476 [2024-07-15 20:42:32.090117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1381265 ] 00:06:28.476 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.476 [2024-07-15 20:42:32.153174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.477 [2024-07-15 20:42:32.224200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.477 20:42:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.859 
20:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:29.859 20:42:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.859 00:06:29.859 real 0m1.292s 00:06:29.859 user 0m1.200s 00:06:29.859 sys 0m0.102s 00:06:29.859 20:42:33 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.859 20:42:33 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:29.859 ************************************ 00:06:29.859 END TEST accel_copy 00:06:29.859 ************************************ 00:06:29.859 20:42:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.859 20:42:33 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.859 20:42:33 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:29.859 20:42:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.859 20:42:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.859 ************************************ 00:06:29.859 START TEST accel_fill 00:06:29.859 ************************************ 00:06:29.859 20:42:33 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:29.859 [2024-07-15 20:42:33.456116] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:29.859 [2024-07-15 20:42:33.456195] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1381621 ] 00:06:29.859 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.859 [2024-07-15 20:42:33.520051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.859 [2024-07-15 20:42:33.588429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.859 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
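The fill case being configured above adds three flags over the earlier ones: the wrapper runs accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y, and the config loop echoes the fill byte back as val=0x80 (128 decimal). The log does not spell out what -q 64 and -a 64 control, so the sketch below simply forwards them unchanged, under the same path assumption as the earlier sketch:

#!/usr/bin/env bash
# Hedged sketch: the fill workload with pattern byte 0x80 (-f 128) and a one-second run.
# -q 64, -a 64 and -y are kept verbatim from the harness command line.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
"$PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y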
00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:29.860 20:42:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.243 20:42:34 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:31.243 20:42:34 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.243 00:06:31.243 real 0m1.291s 00:06:31.243 user 0m1.201s 00:06:31.243 sys 0m0.101s 00:06:31.243 20:42:34 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.243 20:42:34 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:31.243 ************************************ 00:06:31.243 END TEST accel_fill 00:06:31.243 ************************************ 00:06:31.243 20:42:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.243 20:42:34 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:31.243 20:42:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:31.243 20:42:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.243 20:42:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.243 ************************************ 00:06:31.243 START TEST accel_copy_crc32c 00:06:31.243 ************************************ 00:06:31.243 20:42:34 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:31.243 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:31.243 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:31.243 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.243 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.243 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:31.243 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:31.243 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:31.243 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.243 20:42:34 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.243 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.243 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:31.244 [2024-07-15 20:42:34.822040] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:31.244 [2024-07-15 20:42:34.822143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1381968 ] 00:06:31.244 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.244 [2024-07-15 20:42:34.887151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.244 [2024-07-15 20:42:34.957727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.244 
20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.244 20:42:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.628 00:06:32.628 real 0m1.296s 00:06:32.628 user 0m1.202s 00:06:32.628 sys 0m0.106s 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.628 20:42:36 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:32.628 ************************************ 00:06:32.628 END TEST accel_copy_crc32c 00:06:32.628 ************************************ 00:06:32.628 20:42:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.628 20:42:36 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:32.628 20:42:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:32.628 20:42:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.628 20:42:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.628 ************************************ 00:06:32.628 START TEST accel_copy_crc32c_C2 00:06:32.628 ************************************ 00:06:32.628 20:42:36 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:32.628 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.628 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:32.628 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.628 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.628 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:32.628 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:32.628 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.628 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.628 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.628 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.628 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.628 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.628 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:32.628 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:32.629 [2024-07-15 20:42:36.190548] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:32.629 [2024-07-15 20:42:36.190613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382252 ] 00:06:32.629 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.629 [2024-07-15 20:42:36.253843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.629 [2024-07-15 20:42:36.324108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
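accel_copy_crc32c completes above (real 0m1.296s) and its -C 2 variant starts here. In SPDK's accel framework the copy_crc32c opcode combines a buffer copy with a CRC-32C calculation in a single submission, which is why its parsed config echoes two buffer sizes; in the -C 2 run the second value grows to 8192 bytes, as the trace below shows. The two command lines, taken verbatim from the wrapper:

#!/usr/bin/env bash
# The copy_crc32c invocations recorded in this run (paths as used by this job).
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
"$PERF" -t 1 -w copy_crc32c -y        # accel_copy_crc32c
"$PERF" -t 1 -w copy_crc32c -y -C 2   # accel_copy_crc32c_C2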
00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.629 20:42:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.568 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.568 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.568 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.568 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.568 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.568 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.568 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.569 00:06:33.569 real 0m1.293s 00:06:33.569 user 0m1.204s 00:06:33.569 sys 0m0.101s 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.569 20:42:37 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:33.569 ************************************ 00:06:33.569 END TEST accel_copy_crc32c_C2 00:06:33.569 ************************************ 00:06:33.830 20:42:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.830 20:42:37 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:33.830 20:42:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:33.830 20:42:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.830 20:42:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.830 ************************************ 00:06:33.830 START TEST accel_dualcast 00:06:33.830 ************************************ 00:06:33.830 20:42:37 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:33.830 20:42:37 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:33.830 20:42:37 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:33.830 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:33.830 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:33.830 20:42:37 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:33.830 20:42:37 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:33.830 20:42:37 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:33.830 20:42:37 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.830 20:42:37 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.830 20:42:37 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.830 20:42:37 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.830 20:42:37 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.830 20:42:37 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:33.830 20:42:37 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:33.830 [2024-07-15 20:42:37.561188] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
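The last case visible in this stretch is accel_dualcast, started above with accel_test -t 1 -w dualcast -y; in SPDK's accel framework a dualcast writes one source buffer to two destinations. Together with the earlier accel_copy case it is one of the two workloads here that take no extra flags, and both can be reproduced under the same assumptions as the sketches above:

#!/usr/bin/env bash
# The plain (no extra flag) workloads from this stretch of the log.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
"$PERF" -t 1 -w copy -y
"$PERF" -t 1 -w dualcast -y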
00:06:33.830 [2024-07-15 20:42:37.561283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382449 ] 00:06:33.830 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.830 [2024-07-15 20:42:37.622986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.830 [2024-07-15 20:42:37.690717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.090 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.090 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.090 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.090 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.090 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.090 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.090 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.090 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.091 20:42:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.033 20:42:38 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:35.033 20:42:38 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.033 00:06:35.033 real 0m1.288s 00:06:35.033 user 0m1.200s 00:06:35.033 sys 0m0.099s 00:06:35.033 20:42:38 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.033 20:42:38 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:35.033 ************************************ 00:06:35.033 END TEST accel_dualcast 00:06:35.033 ************************************ 00:06:35.033 20:42:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.033 20:42:38 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:35.033 20:42:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:35.033 20:42:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.033 20:42:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.033 ************************************ 00:06:35.033 START TEST accel_compare 00:06:35.033 ************************************ 00:06:35.033 20:42:38 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:35.033 20:42:38 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:35.033 20:42:38 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:35.033 20:42:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.033 20:42:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.033 20:42:38 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:35.033 20:42:38 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:35.033 20:42:38 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:35.033 20:42:38 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.033 20:42:38 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.033 20:42:38 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.033 20:42:38 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.033 20:42:38 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.033 20:42:38 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:35.033 20:42:38 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:35.294 [2024-07-15 20:42:38.925820] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
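Every accel_* case in this block follows the same harness pattern visible in the trace: accel.sh launches accel_perf, reads its configuration printout line by line with IFS=: and read -r var val, captures the opcode and module into accel_opc and accel_module, and finally asserts that both were seen and that the module is software. A stripped-down sketch of that loop is below; the variable names and the closing checks are taken from the trace, but the matched key strings are assumptions, not the harness's actual patterns:

    accel_opc='' accel_module=''
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=${val//[[:space:]]/} ;;    # key name assumed
            *module*) accel_module=${val//[[:space:]]/} ;; # key name assumed
        esac
    done < <(./build/examples/accel_perf -t 1 -w compare -y)
    [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]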
00:06:35.294 [2024-07-15 20:42:38.925914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382709 ] 00:06:35.294 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.294 [2024-07-15 20:42:38.986698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.294 [2024-07-15 20:42:39.053209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:35.294 20:42:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 
20:42:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:36.678 20:42:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.678 00:06:36.678 real 0m1.288s 00:06:36.678 user 0m1.200s 00:06:36.678 sys 0m0.098s 00:06:36.678 20:42:40 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.678 20:42:40 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:36.678 ************************************ 00:06:36.678 END TEST accel_compare 00:06:36.678 ************************************ 00:06:36.678 20:42:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.678 20:42:40 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:36.678 20:42:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:36.678 20:42:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.678 20:42:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.678 ************************************ 00:06:36.678 START TEST accel_xor 00:06:36.678 ************************************ 00:06:36.678 20:42:40 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:36.678 [2024-07-15 20:42:40.287197] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
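The accel_xor case above is the same one-second software run (-t 1 -w xor -y); the val=2 read back from the configuration printout corresponds to the default of two source buffers being XORed into the 4096-byte destination. A minimal manual equivalent, again sketched with the JSON config omitted:

    # XOR the default two 4096-byte source buffers for 1 second and verify the result
    ./build/examples/accel_perf -t 1 -w xor -y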
00:06:36.678 [2024-07-15 20:42:40.287265] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383062 ] 00:06:36.678 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.678 [2024-07-15 20:42:40.349305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.678 [2024-07-15 20:42:40.417184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.678 20:42:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.061 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.062 00:06:38.062 real 0m1.286s 00:06:38.062 user 0m1.199s 00:06:38.062 sys 0m0.099s 00:06:38.062 20:42:41 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.062 20:42:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:38.062 ************************************ 00:06:38.062 END TEST accel_xor 00:06:38.062 ************************************ 00:06:38.062 20:42:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.062 20:42:41 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:38.062 20:42:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:38.062 20:42:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.062 20:42:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.062 ************************************ 00:06:38.062 START TEST accel_xor 00:06:38.062 ************************************ 00:06:38.062 20:42:41 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:38.062 [2024-07-15 20:42:41.652718] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
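The second accel_xor case reruns the same workload with -x 3; the only configuration change visible in the trace is val=3 in place of val=2, i.e. three source buffers instead of two. Sketched manually:

    # same XOR verify run, but with three source buffers (-x sets the source count)
    ./build/examples/accel_perf -t 1 -w xor -y -x 3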
00:06:38.062 [2024-07-15 20:42:41.652810] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383411 ] 00:06:38.062 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.062 [2024-07-15 20:42:41.715389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.062 [2024-07-15 20:42:41.784626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.062 20:42:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:39.447 20:42:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.447 00:06:39.447 real 0m1.291s 00:06:39.447 user 0m1.196s 00:06:39.447 sys 0m0.106s 00:06:39.447 20:42:42 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.447 20:42:42 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:39.447 ************************************ 00:06:39.447 END TEST accel_xor 00:06:39.447 ************************************ 00:06:39.447 20:42:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.447 20:42:42 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:39.447 20:42:42 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:39.447 20:42:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.447 20:42:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.447 ************************************ 00:06:39.447 START TEST accel_dif_verify 00:06:39.447 ************************************ 00:06:39.447 20:42:42 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:39.447 20:42:42 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:39.447 20:42:42 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:39.447 20:42:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:42 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:39.447 20:42:42 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:39.447 20:42:42 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:39.447 20:42:42 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.447 20:42:42 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.447 20:42:42 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.447 20:42:42 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.447 20:42:42 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.447 20:42:42 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:39.447 20:42:42 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:39.447 [2024-07-15 20:42:43.018635] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
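accel_dif_verify switches to the DIF opcodes, and this and the following dif_* cases drop -y (the trace shows accel_perf invoked with only -t 1 -w dif_verify). The '4096 bytes', '512 bytes' and '8 bytes' values read from the printout appear to describe a 4 KiB buffer split into 512-byte blocks, each carrying 8 bytes of DIF metadata. A minimal manual equivalent, under the same assumption that the JSON config can be omitted:

    # exercise the dif_verify opcode on the software module for 1 second
    ./build/examples/accel_perf -t 1 -w dif_verify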
00:06:39.447 [2024-07-15 20:42:43.018711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383739 ] 00:06:39.447 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.447 [2024-07-15 20:42:43.098930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.447 [2024-07-15 20:42:43.170427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.447 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:39.448 20:42:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:40.832 20:42:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.832 00:06:40.832 real 0m1.309s 00:06:40.832 user 0m1.203s 00:06:40.832 sys 0m0.120s 00:06:40.832 20:42:44 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.832 20:42:44 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:40.832 ************************************ 00:06:40.832 END TEST accel_dif_verify 00:06:40.832 ************************************ 00:06:40.832 20:42:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.832 20:42:44 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:40.832 20:42:44 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:40.832 20:42:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.832 20:42:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.832 ************************************ 00:06:40.832 START TEST accel_dif_generate 00:06:40.832 ************************************ 00:06:40.832 20:42:44 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.832 
20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:40.832 [2024-07-15 20:42:44.406671] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:40.832 [2024-07-15 20:42:44.406742] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383946 ] 00:06:40.832 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.832 [2024-07-15 20:42:44.469784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.832 [2024-07-15 20:42:44.539790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.832 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:40.833 20:42:44 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.833 20:42:44 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:40.833 20:42:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.844 20:42:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:41.844 20:42:45 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.844 00:06:41.844 real 0m1.292s 00:06:41.844 user 0m1.200s 00:06:41.844 sys 0m0.105s 00:06:41.844 20:42:45 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.844 20:42:45 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:41.844 ************************************ 00:06:41.844 END TEST accel_dif_generate 00:06:41.844 ************************************ 00:06:41.844 20:42:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.844 20:42:45 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:41.844 20:42:45 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:41.844 20:42:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.844 20:42:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.128 ************************************ 00:06:42.128 START TEST accel_dif_generate_copy 00:06:42.128 ************************************ 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:42.128 [2024-07-15 20:42:45.774799] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
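The two DIF tests above drive the same accel_perf example binary for one second each against the software accel module, first with the dif_generate workload and then with dif_generate_copy. A minimal standalone sketch of those invocations follows; the workload flags and the checkout path are taken from the traced command lines, while dropping -c (the CI script pipes a generated JSON accel config over /dev/fd/62 via build_accel_config) and running under sudo for hugepage access are assumptions.

# Hedged sketch of the traced dif_generate / dif_generate_copy runs.
# Assumption: no -c config is passed, so accel_perf falls back to its defaults.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate        # 1-second DIF-generate pass
sudo "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate_copy   # same workload, copy variant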
00:06:42.128 [2024-07-15 20:42:45.774869] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384156 ] 00:06:42.128 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.128 [2024-07-15 20:42:45.838941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.128 [2024-07-15 20:42:45.912321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.128 20:42:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.513 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.514 00:06:43.514 real 0m1.296s 00:06:43.514 user 0m1.205s 00:06:43.514 sys 0m0.104s 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.514 20:42:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:43.514 ************************************ 00:06:43.514 END TEST accel_dif_generate_copy 00:06:43.514 ************************************ 00:06:43.514 20:42:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.514 20:42:47 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:43.514 20:42:47 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.514 20:42:47 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:43.514 20:42:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.514 20:42:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.514 ************************************ 00:06:43.514 START TEST accel_comp 00:06:43.514 ************************************ 00:06:43.514 20:42:47 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.514 20:42:47 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:43.514 [2024-07-15 20:42:47.147396] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:43.514 [2024-07-15 20:42:47.147458] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384500 ] 00:06:43.514 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.514 [2024-07-15 20:42:47.207784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.514 [2024-07-15 20:42:47.273095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.514 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:43.515 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.515 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.515 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:43.515 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.515 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.515 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.515 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:43.515 20:42:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:43.515 20:42:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.515 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:43.515 20:42:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:44.899 20:42:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.899 00:06:44.899 real 0m1.285s 00:06:44.899 user 0m1.198s 00:06:44.899 sys 0m0.100s 00:06:44.899 20:42:48 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.899 20:42:48 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:44.899 ************************************ 00:06:44.899 END TEST accel_comp 00:06:44.899 ************************************ 00:06:44.899 20:42:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.899 20:42:48 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.899 20:42:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:44.899 20:42:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.899 20:42:48 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:44.899 ************************************ 00:06:44.899 START TEST accel_decomp 00:06:44.899 ************************************ 00:06:44.899 20:42:48 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.899 20:42:48 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:44.899 20:42:48 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:44.899 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.899 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.899 20:42:48 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.899 20:42:48 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.899 20:42:48 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:44.899 20:42:48 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.899 20:42:48 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.899 20:42:48 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.899 20:42:48 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:44.900 [2024-07-15 20:42:48.508722] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
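The compress and decompress cases keep the same one-second accel_perf run but switch the workload type and point -l at the bib sample file in the repository's test/accel directory; the decompress runs also pass -y so the output is verified. A hedged sketch, with the same path, sudo, and no-config assumptions as above:

# Sketch of the traced compress/decompress invocations (flags copied from the log).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo "$SPDK/build/examples/accel_perf" -t 1 -w compress   -l "$SPDK/test/accel/bib"      # compress the bib sample
sudo "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y   # decompress and verify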
00:06:44.900 [2024-07-15 20:42:48.508790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384857 ] 00:06:44.900 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.900 [2024-07-15 20:42:48.570141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.900 [2024-07-15 20:42:48.636537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:44.900 20:42:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.282 20:42:49 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.282 20:42:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.283 20:42:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:46.283 20:42:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.283 00:06:46.283 real 0m1.287s 00:06:46.283 user 0m1.199s 00:06:46.283 sys 0m0.100s 00:06:46.283 20:42:49 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.283 20:42:49 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:46.283 ************************************ 00:06:46.283 END TEST accel_decomp 00:06:46.283 ************************************ 00:06:46.283 20:42:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.283 20:42:49 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:46.283 20:42:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:46.283 20:42:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.283 20:42:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.283 ************************************ 00:06:46.283 START TEST accel_decomp_full 00:06:46.283 ************************************ 00:06:46.283 20:42:49 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:46.283 20:42:49 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:46.283 20:42:49 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:46.283 20:42:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:49 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:46.283 20:42:49 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:46.283 20:42:49 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:46.283 20:42:49 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.283 20:42:49 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.283 20:42:49 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.283 20:42:49 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.283 20:42:49 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.283 20:42:49 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:46.283 20:42:49 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:46.283 [2024-07-15 20:42:49.873507] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:46.283 [2024-07-15 20:42:49.873568] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385204 ] 00:06:46.283 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.283 [2024-07-15 20:42:49.934594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.283 [2024-07-15 20:42:50.003248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:46.283 20:42:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:47.666 20:42:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.666 00:06:47.666 real 0m1.304s 00:06:47.666 user 0m1.215s 00:06:47.666 sys 0m0.102s 00:06:47.666 20:42:51 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.666 20:42:51 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:47.666 ************************************ 00:06:47.666 END TEST accel_decomp_full 00:06:47.666 ************************************ 00:06:47.666 20:42:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.666 20:42:51 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:47.666 20:42:51 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:47.666 20:42:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.666 20:42:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.666 ************************************ 00:06:47.666 START TEST accel_decomp_mcore 00:06:47.666 ************************************ 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:47.666 [2024-07-15 20:42:51.252703] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:06:47.666 [2024-07-15 20:42:51.252793] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385407 ] 00:06:47.666 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.666 [2024-07-15 20:42:51.315435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.666 [2024-07-15 20:42:51.386826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.666 [2024-07-15 20:42:51.386957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.666 [2024-07-15 20:42:51.387115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.666 [2024-07-15 20:42:51.387115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.666 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.667 20:42:51 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:47.667 20:42:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.050 00:06:49.050 real 0m1.302s 00:06:49.050 user 0m4.443s 00:06:49.050 sys 0m0.109s 00:06:49.050 20:42:52 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.050 20:42:52 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:49.050 ************************************ 00:06:49.050 END TEST accel_decomp_mcore 00:06:49.050 ************************************ 00:06:49.050 20:42:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.050 20:42:52 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.050 20:42:52 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:49.050 20:42:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.050 20:42:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.050 ************************************ 00:06:49.050 START TEST accel_decomp_full_mcore 00:06:49.050 ************************************ 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:49.050 [2024-07-15 20:42:52.632602] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
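The accel_decomp_full_mcore case that starts here is driven through the accel.sh wrapper, but the trace prints the underlying accel_perf command verbatim. A minimal manual reproduction, assuming the same built SPDK tree (written below as $SPDK_DIR, an editor-supplied placeholder for the workspace path shown in the trace), would look roughly like:

    # decompress the pre-compressed test input on four cores (mask 0xf),
    # verify the output (-y) and run for one second (-t 1)
    $SPDK_DIR/build/examples/accel_perf -t 1 -w decompress \
        -l $SPDK_DIR/test/accel/bib -y -o 0 -m 0xf

The -m 0xf core mask is what produces the "Total cores available: 4" notice and the four "Reactor started on core N" lines that follow; the harness additionally passes -c /dev/fd/62 with a generated accel configuration, which is touched on again further below.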
00:06:49.050 [2024-07-15 20:42:52.632681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385608 ] 00:06:49.050 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.050 [2024-07-15 20:42:52.695452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:49.050 [2024-07-15 20:42:52.765203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.050 [2024-07-15 20:42:52.765476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.050 [2024-07-15 20:42:52.765634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.050 [2024-07-15 20:42:52.765634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.050 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.051 20:42:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.434 00:06:50.434 real 0m1.311s 00:06:50.434 user 0m4.474s 00:06:50.434 sys 0m0.116s 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.434 20:42:53 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:50.434 ************************************ 00:06:50.434 END TEST accel_decomp_full_mcore 00:06:50.434 ************************************ 00:06:50.434 20:42:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.435 20:42:53 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:50.435 20:42:53 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:50.435 20:42:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.435 20:42:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.435 ************************************ 00:06:50.435 START TEST accel_decomp_mthread 00:06:50.435 ************************************ 00:06:50.435 20:42:53 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:50.435 20:42:53 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:50.435 20:42:53 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:50.435 20:42:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:53 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:50.435 20:42:53 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:50.435 20:42:53 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:50.435 20:42:53 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.435 20:42:53 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.435 20:42:53 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.435 20:42:53 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.435 20:42:53 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.435 20:42:53 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:50.435 20:42:53 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:50.435 [2024-07-15 20:42:54.022061] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
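Before the accel_decomp_mthread run below gets going, it may help to line up the three decompress variants whose run_test invocations appear in this part of the log; they differ only in a few accel_test flags (paths shortened):

    # accel_decomp_full_mcore   accel_test -t 1 -w decompress -l .../test/accel/bib -y -o 0 -m 0xf
    # accel_decomp_mthread      accel_test -t 1 -w decompress -l .../test/accel/bib -y -T 2
    # accel_decomp_full_mthread accel_test -t 1 -w decompress -l .../test/accel/bib -y -o 0 -T 2

Judging from the logged values rather than the tool's help text, -o 0 switches the transfer size from the default '4096 bytes' to the full '111250 bytes' input, -m 0xf spreads the work across four reactors (hence roughly 4x user time, 0m4.474s, against 0m1.311s real for the full_mcore case), and -T 2 runs two worker threads on the single core.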
00:06:50.435 [2024-07-15 20:42:54.022172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385947 ] 00:06:50.435 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.435 [2024-07-15 20:42:54.083826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.435 [2024-07-15 20:42:54.152812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.435 20:42:54 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.435 20:42:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.855 00:06:51.855 real 0m1.296s 00:06:51.855 user 0m1.201s 00:06:51.855 sys 0m0.107s 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.855 20:42:55 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:51.855 ************************************ 00:06:51.855 END TEST accel_decomp_mthread 00:06:51.855 ************************************ 00:06:51.855 20:42:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.855 20:42:55 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:51.855 20:42:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:51.855 20:42:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.855 20:42:55 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:51.855 ************************************ 00:06:51.855 START TEST accel_decomp_full_mthread 00:06:51.855 ************************************ 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:51.855 [2024-07-15 20:42:55.393952] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
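Each accel_perf invocation in this block is also given -c /dev/fd/62, a JSON accel configuration produced by build_accel_config; in this run every [[ 0 -gt 0 ]] and [[ -n '' ]] check in the trace is false, so the configuration is effectively empty and the decompress opcode falls back to the software module (accel_module=software, later asserted by [[ -n software ]]). On a running target the same assignment can be inspected over RPC; the accel_rpc test further down does exactly this for the copy opcode, and keying the output by .decompress here is an assumption that the opcode is reported the same way:

    # assumes a running spdk_tgt listening on the default /var/tmp/spdk.sock
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .decompress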
00:06:51.855 [2024-07-15 20:42:55.394048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386294 ] 00:06:51.855 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.855 [2024-07-15 20:42:55.456688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.855 [2024-07-15 20:42:55.521149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.855 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.856 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.856 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.856 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.856 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:51.856 20:42:55 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:51.856 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:51.856 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:51.856 20:42:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.797 00:06:52.797 real 0m1.319s 00:06:52.797 user 0m1.226s 00:06:52.797 sys 0m0.104s 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.797 20:42:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:52.797 ************************************ 00:06:52.797 END 
TEST accel_decomp_full_mthread 00:06:52.797 ************************************ 00:06:53.058 20:42:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.058 20:42:56 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:53.058 20:42:56 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:53.058 20:42:56 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:53.058 20:42:56 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:53.058 20:42:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.058 20:42:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.058 20:42:56 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.058 20:42:56 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.058 20:42:56 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.058 20:42:56 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.058 20:42:56 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.058 20:42:56 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:53.058 20:42:56 accel -- accel/accel.sh@41 -- # jq -r . 00:06:53.058 ************************************ 00:06:53.058 START TEST accel_dif_functional_tests 00:06:53.058 ************************************ 00:06:53.058 20:42:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:53.058 [2024-07-15 20:42:56.812275] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:53.058 [2024-07-15 20:42:56.812335] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386650 ] 00:06:53.058 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.058 [2024-07-15 20:42:56.874439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.058 [2024-07-15 20:42:56.949638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.058 [2024-07-15 20:42:56.949755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.058 [2024-07-15 20:42:56.949758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.319 00:06:53.319 00:06:53.319 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.319 http://cunit.sourceforge.net/ 00:06:53.319 00:06:53.319 00:06:53.319 Suite: accel_dif 00:06:53.319 Test: verify: DIF generated, GUARD check ...passed 00:06:53.319 Test: verify: DIF generated, APPTAG check ...passed 00:06:53.319 Test: verify: DIF generated, REFTAG check ...passed 00:06:53.319 Test: verify: DIF not generated, GUARD check ...[2024-07-15 20:42:57.005893] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:53.319 passed 00:06:53.319 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 20:42:57.005937] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:53.319 passed 00:06:53.319 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 20:42:57.005959] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:53.319 passed 00:06:53.319 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:53.319 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
20:42:57.006010] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:53.319 passed 00:06:53.319 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:53.319 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:53.319 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:53.319 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 20:42:57.006121] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:53.319 passed 00:06:53.319 Test: verify copy: DIF generated, GUARD check ...passed 00:06:53.319 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:53.319 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:53.319 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 20:42:57.006246] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:53.319 passed 00:06:53.319 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 20:42:57.006270] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:53.319 passed 00:06:53.319 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 20:42:57.006291] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:53.319 passed 00:06:53.319 Test: generate copy: DIF generated, GUARD check ...passed 00:06:53.319 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:53.319 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:53.319 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:53.319 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:53.319 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:53.319 Test: generate copy: iovecs-len validate ...[2024-07-15 20:42:57.006476] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:53.319 passed 00:06:53.319 Test: generate copy: buffer alignment validate ...passed 00:06:53.319 00:06:53.319 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.319 suites 1 1 n/a 0 0 00:06:53.319 tests 26 26 26 0 0 00:06:53.319 asserts 115 115 115 0 n/a 00:06:53.319 00:06:53.319 Elapsed time = 0.002 seconds 00:06:53.319 00:06:53.319 real 0m0.360s 00:06:53.319 user 0m0.489s 00:06:53.319 sys 0m0.137s 00:06:53.319 20:42:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.319 20:42:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:53.319 ************************************ 00:06:53.319 END TEST accel_dif_functional_tests 00:06:53.319 ************************************ 00:06:53.319 20:42:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.319 00:06:53.319 real 0m30.113s 00:06:53.319 user 0m33.756s 00:06:53.320 sys 0m4.109s 00:06:53.320 20:42:57 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.320 20:42:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.320 ************************************ 00:06:53.320 END TEST accel 00:06:53.320 ************************************ 00:06:53.320 20:42:57 -- common/autotest_common.sh@1142 -- # return 0 00:06:53.320 20:42:57 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:53.320 20:42:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.320 20:42:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.320 20:42:57 -- common/autotest_common.sh@10 -- # set +x 00:06:53.581 ************************************ 00:06:53.581 START TEST accel_rpc 00:06:53.581 ************************************ 00:06:53.581 20:42:57 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:53.581 * Looking for test storage... 00:06:53.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:53.581 20:42:57 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:53.581 20:42:57 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1386720 00:06:53.581 20:42:57 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1386720 00:06:53.581 20:42:57 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:53.581 20:42:57 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1386720 ']' 00:06:53.581 20:42:57 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.581 20:42:57 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.581 20:42:57 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.581 20:42:57 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.581 20:42:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.581 [2024-07-15 20:42:57.396695] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
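The dif.c *ERROR* lines interleaved with the accel_dif results above are expected output: the negative cases (verify and verify copy with "DIF not generated", the APPTAG/REFTAG mismatch cases, and the iovecs-len validate case) deliberately corrupt a Guard, App Tag or Ref Tag field, or pass misaligned bounce_iovs, and then assert that verification fails, which is why each of them still ends in "passed" and the CUnit summary reports 0 failures across 26 tests and 115 asserts. The suite is a standalone SPDK application; the harness hands it the generated accel config on fd 62, and since that config is empty in this run it can presumably be launched with defaults as well:

    # rerun the DIF functional suite outside the harness (default software accel module assumed)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif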
00:06:53.581 [2024-07-15 20:42:57.396762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386720 ] 00:06:53.581 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.581 [2024-07-15 20:42:57.462619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.842 [2024-07-15 20:42:57.536085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.413 20:42:58 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.413 20:42:58 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:54.413 20:42:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:54.413 20:42:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:54.413 20:42:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:54.413 20:42:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:54.413 20:42:58 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:54.413 20:42:58 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.413 20:42:58 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.413 20:42:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.413 ************************************ 00:06:54.413 START TEST accel_assign_opcode 00:06:54.413 ************************************ 00:06:54.413 20:42:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:54.413 20:42:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:54.413 20:42:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.413 20:42:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:54.413 [2024-07-15 20:42:58.214062] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:54.413 20:42:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.413 20:42:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:54.413 20:42:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.413 20:42:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:54.413 [2024-07-15 20:42:58.226086] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:54.413 20:42:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.413 20:42:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:54.413 20:42:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.413 20:42:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:54.673 20:42:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.673 20:42:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:54.673 20:42:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:54.673 20:42:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
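Condensed from the trace just above, the accel_assign_opcode case boils down to a handful of RPC calls against a target started with --wait-for-rpc, so that the opcode assignments happen before framework init; paths are shortened here but are the same build/bin/spdk_tgt and scripts/rpc.py used in the log:

    spdk_tgt --wait-for-rpc &                        # target waits for RPCs before initializing
    rpc.py accel_assign_opc -o copy -m incorrect     # logged NOTICE: copy assigned to module incorrect
    rpc.py accel_assign_opc -o copy -m software      # re-assignment before init takes precedence
    rpc.py framework_start_init
    rpc.py accel_get_opc_assignments | jq -r .copy   # the grep below then finds 'software'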
00:06:54.673 20:42:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:54.673 20:42:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:54.673 20:42:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.673 software 00:06:54.673 00:06:54.673 real 0m0.220s 00:06:54.673 user 0m0.053s 00:06:54.673 sys 0m0.008s 00:06:54.673 20:42:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.673 20:42:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:54.673 ************************************ 00:06:54.673 END TEST accel_assign_opcode 00:06:54.673 ************************************ 00:06:54.673 20:42:58 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:54.673 20:42:58 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1386720 00:06:54.673 20:42:58 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1386720 ']' 00:06:54.673 20:42:58 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1386720 00:06:54.673 20:42:58 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:54.673 20:42:58 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.673 20:42:58 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1386720 00:06:54.673 20:42:58 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.674 20:42:58 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.674 20:42:58 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1386720' 00:06:54.674 killing process with pid 1386720 00:06:54.674 20:42:58 accel_rpc -- common/autotest_common.sh@967 -- # kill 1386720 00:06:54.674 20:42:58 accel_rpc -- common/autotest_common.sh@972 -- # wait 1386720 00:06:54.934 00:06:54.934 real 0m1.498s 00:06:54.934 user 0m1.587s 00:06:54.934 sys 0m0.407s 00:06:54.934 20:42:58 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.934 20:42:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.934 ************************************ 00:06:54.934 END TEST accel_rpc 00:06:54.934 ************************************ 00:06:54.934 20:42:58 -- common/autotest_common.sh@1142 -- # return 0 00:06:54.934 20:42:58 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:54.934 20:42:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.934 20:42:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.934 20:42:58 -- common/autotest_common.sh@10 -- # set +x 00:06:54.934 ************************************ 00:06:54.934 START TEST app_cmdline 00:06:54.934 ************************************ 00:06:54.934 20:42:58 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:55.194 * Looking for test storage... 
00:06:55.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:55.194 20:42:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:55.194 20:42:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1387128 00:06:55.194 20:42:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1387128 00:06:55.194 20:42:58 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:55.194 20:42:58 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1387128 ']' 00:06:55.194 20:42:58 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.194 20:42:58 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.194 20:42:58 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.194 20:42:58 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.194 20:42:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.194 [2024-07-15 20:42:58.960760] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:06:55.194 [2024-07-15 20:42:58.960819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1387128 ] 00:06:55.194 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.194 [2024-07-15 20:42:59.021177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.454 [2024-07-15 20:42:59.089143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:56.024 20:42:59 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:56.024 { 00:06:56.024 "version": "SPDK v24.09-pre git sha1 06cc9fb0c", 00:06:56.024 "fields": { 00:06:56.024 "major": 24, 00:06:56.024 "minor": 9, 00:06:56.024 "patch": 0, 00:06:56.024 "suffix": "-pre", 00:06:56.024 "commit": "06cc9fb0c" 00:06:56.024 } 00:06:56.024 } 00:06:56.024 20:42:59 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:56.024 20:42:59 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:56.024 20:42:59 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:56.024 20:42:59 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:56.024 20:42:59 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:56.024 20:42:59 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:56.024 20:42:59 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.024 20:42:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:56.024 20:42:59 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:56.024 20:42:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:56.024 20:42:59 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.285 request: 00:06:56.285 { 00:06:56.285 "method": "env_dpdk_get_mem_stats", 00:06:56.285 "req_id": 1 00:06:56.285 } 00:06:56.285 Got JSON-RPC error response 00:06:56.285 response: 00:06:56.285 { 00:06:56.285 "code": -32601, 00:06:56.285 "message": "Method not found" 00:06:56.285 } 00:06:56.285 20:43:00 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:56.285 20:43:00 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.285 20:43:00 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.285 20:43:00 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.285 20:43:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1387128 00:06:56.285 20:43:00 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1387128 ']' 00:06:56.286 20:43:00 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1387128 00:06:56.286 20:43:00 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:56.286 20:43:00 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.286 20:43:00 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1387128 00:06:56.286 20:43:00 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.286 20:43:00 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.286 20:43:00 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1387128' 00:06:56.286 killing process with pid 1387128 00:06:56.286 20:43:00 app_cmdline -- common/autotest_common.sh@967 -- # kill 1387128 00:06:56.286 20:43:00 app_cmdline -- common/autotest_common.sh@972 -- # wait 1387128 00:06:56.547 00:06:56.548 real 0m1.518s 00:06:56.548 user 0m1.811s 00:06:56.548 sys 0m0.386s 00:06:56.548 20:43:00 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:56.548 20:43:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.548 ************************************ 00:06:56.548 END TEST app_cmdline 00:06:56.548 ************************************ 00:06:56.548 20:43:00 -- common/autotest_common.sh@1142 -- # return 0 00:06:56.548 20:43:00 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:56.548 20:43:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.548 20:43:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.548 20:43:00 -- common/autotest_common.sh@10 -- # set +x 00:06:56.548 ************************************ 00:06:56.548 START TEST version 00:06:56.548 ************************************ 00:06:56.548 20:43:00 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:56.809 * Looking for test storage... 00:06:56.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:56.809 20:43:00 version -- app/version.sh@17 -- # get_header_version major 00:06:56.809 20:43:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:56.809 20:43:00 version -- app/version.sh@14 -- # cut -f2 00:06:56.809 20:43:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.809 20:43:00 version -- app/version.sh@17 -- # major=24 00:06:56.809 20:43:00 version -- app/version.sh@18 -- # get_header_version minor 00:06:56.809 20:43:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:56.809 20:43:00 version -- app/version.sh@14 -- # cut -f2 00:06:56.809 20:43:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.809 20:43:00 version -- app/version.sh@18 -- # minor=9 00:06:56.809 20:43:00 version -- app/version.sh@19 -- # get_header_version patch 00:06:56.809 20:43:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:56.809 20:43:00 version -- app/version.sh@14 -- # cut -f2 00:06:56.809 20:43:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.809 20:43:00 version -- app/version.sh@19 -- # patch=0 00:06:56.809 20:43:00 version -- app/version.sh@20 -- # get_header_version suffix 00:06:56.809 20:43:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:56.809 20:43:00 version -- app/version.sh@14 -- # cut -f2 00:06:56.809 20:43:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.809 20:43:00 version -- app/version.sh@20 -- # suffix=-pre 00:06:56.809 20:43:00 version -- app/version.sh@22 -- # version=24.9 00:06:56.809 20:43:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:56.809 20:43:00 version -- app/version.sh@28 -- # version=24.9rc0 00:06:56.809 20:43:00 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:56.809 20:43:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:56.809 20:43:00 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:56.809 20:43:00 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:56.809 00:06:56.809 real 0m0.152s 00:06:56.809 user 0m0.078s 00:06:56.809 sys 0m0.108s 00:06:56.809 20:43:00 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.809 20:43:00 version -- common/autotest_common.sh@10 -- # set +x 00:06:56.809 ************************************ 00:06:56.809 END TEST version 00:06:56.809 ************************************ 00:06:56.809 20:43:00 -- common/autotest_common.sh@1142 -- # return 0 00:06:56.809 20:43:00 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:56.809 20:43:00 -- spdk/autotest.sh@198 -- # uname -s 00:06:56.809 20:43:00 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:56.809 20:43:00 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:56.809 20:43:00 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:56.809 20:43:00 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:56.809 20:43:00 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:56.809 20:43:00 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:56.809 20:43:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:56.809 20:43:00 -- common/autotest_common.sh@10 -- # set +x 00:06:56.809 20:43:00 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:56.809 20:43:00 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:56.809 20:43:00 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:56.809 20:43:00 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:56.809 20:43:00 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:56.809 20:43:00 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:56.809 20:43:00 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:56.809 20:43:00 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:56.809 20:43:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.809 20:43:00 -- common/autotest_common.sh@10 -- # set +x 00:06:56.809 ************************************ 00:06:56.809 START TEST nvmf_tcp 00:06:56.809 ************************************ 00:06:56.809 20:43:00 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:57.070 * Looking for test storage... 00:06:57.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.071 20:43:00 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.071 20:43:00 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.071 20:43:00 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.071 20:43:00 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.071 20:43:00 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.071 20:43:00 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.071 20:43:00 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:57.071 20:43:00 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:57.071 20:43:00 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:57.071 20:43:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:57.071 20:43:00 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:57.071 20:43:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:57.071 20:43:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.071 20:43:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.071 ************************************ 00:06:57.071 START TEST nvmf_example 00:06:57.071 ************************************ 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:57.071 * Looking for test storage... 
00:06:57.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:57.071 20:43:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.332 20:43:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:57.332 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:57.332 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.332 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:57.332 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:57.332 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:57.332 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.332 20:43:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.332 20:43:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.332 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:57.332 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:57.332 20:43:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:57.332 20:43:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:03.917 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:03.917 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:03.917 Found net devices under 
0000:4b:00.0: cvl_0_0 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:03.917 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:03.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:03.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:07:03.917 00:07:03.917 --- 10.0.0.2 ping statistics --- 00:07:03.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.917 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:03.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:03.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:07:03.917 00:07:03.917 --- 10.0.0.1 ping statistics --- 00:07:03.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.917 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1391219 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1391219 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1391219 ']' 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:03.917 20:43:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:03.917 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:04.897 20:43:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:04.897 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.896 Initializing NVMe Controllers 00:07:14.896 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:14.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:14.896 Initialization complete. Launching workers. 00:07:14.896 ======================================================== 00:07:14.896 Latency(us) 00:07:14.896 Device Information : IOPS MiB/s Average min max 00:07:14.896 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18128.87 70.82 3530.18 860.32 16293.91 00:07:14.896 ======================================================== 00:07:14.897 Total : 18128.87 70.82 3530.18 860.32 16293.91 00:07:14.897 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:14.897 rmmod nvme_tcp 00:07:14.897 rmmod nvme_fabrics 00:07:14.897 rmmod nvme_keyring 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1391219 ']' 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1391219 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1391219 ']' 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1391219 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.897 20:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1391219 00:07:15.156 20:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:15.156 20:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:15.156 20:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1391219' 00:07:15.156 killing process with pid 1391219 00:07:15.156 20:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1391219 00:07:15.156 20:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1391219 00:07:15.156 nvmf threads initialize successfully 00:07:15.156 bdev subsystem init successfully 00:07:15.156 created a nvmf target service 00:07:15.156 create targets's poll groups done 00:07:15.156 all subsystems of target started 00:07:15.156 nvmf target is 
running 00:07:15.156 all subsystems of target stopped 00:07:15.156 destroy targets's poll groups done 00:07:15.156 destroyed the nvmf target service 00:07:15.156 bdev subsystem finish successfully 00:07:15.156 nvmf threads destroy successfully 00:07:15.156 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:15.156 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:15.156 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:15.156 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:15.156 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:15.156 20:43:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.156 20:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:15.156 20:43:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.703 20:43:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:17.703 20:43:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:17.703 20:43:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:17.703 20:43:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:17.703 00:07:17.703 real 0m20.214s 00:07:17.703 user 0m42.918s 00:07:17.703 sys 0m7.273s 00:07:17.703 20:43:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.703 20:43:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:17.703 ************************************ 00:07:17.703 END TEST nvmf_example 00:07:17.703 ************************************ 00:07:17.703 20:43:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:17.703 20:43:21 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:17.703 20:43:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:17.703 20:43:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.703 20:43:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.703 ************************************ 00:07:17.703 START TEST nvmf_filesystem 00:07:17.703 ************************************ 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:17.703 * Looking for test storage... 
00:07:17.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:17.703 20:43:21 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:17.703 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:17.704 #define SPDK_CONFIG_H 00:07:17.704 #define SPDK_CONFIG_APPS 1 00:07:17.704 #define SPDK_CONFIG_ARCH native 00:07:17.704 #undef SPDK_CONFIG_ASAN 00:07:17.704 #undef SPDK_CONFIG_AVAHI 00:07:17.704 #undef SPDK_CONFIG_CET 00:07:17.704 #define SPDK_CONFIG_COVERAGE 1 00:07:17.704 #define SPDK_CONFIG_CROSS_PREFIX 00:07:17.704 #undef SPDK_CONFIG_CRYPTO 00:07:17.704 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:17.704 #undef SPDK_CONFIG_CUSTOMOCF 00:07:17.704 #undef SPDK_CONFIG_DAOS 00:07:17.704 #define SPDK_CONFIG_DAOS_DIR 00:07:17.704 #define SPDK_CONFIG_DEBUG 1 00:07:17.704 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:17.704 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:17.704 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:17.704 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:17.704 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:17.704 #undef SPDK_CONFIG_DPDK_UADK 00:07:17.704 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:17.704 #define SPDK_CONFIG_EXAMPLES 1 00:07:17.704 #undef SPDK_CONFIG_FC 00:07:17.704 #define SPDK_CONFIG_FC_PATH 00:07:17.704 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:17.704 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:17.704 #undef SPDK_CONFIG_FUSE 00:07:17.704 #undef SPDK_CONFIG_FUZZER 00:07:17.704 #define SPDK_CONFIG_FUZZER_LIB 00:07:17.704 #undef SPDK_CONFIG_GOLANG 00:07:17.704 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:17.704 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:17.704 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:17.704 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:17.704 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:17.704 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:17.704 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:17.704 #define SPDK_CONFIG_IDXD 1 00:07:17.704 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:17.704 #undef SPDK_CONFIG_IPSEC_MB 00:07:17.704 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:17.704 #define SPDK_CONFIG_ISAL 1 00:07:17.704 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:17.704 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:17.704 #define SPDK_CONFIG_LIBDIR 00:07:17.704 #undef SPDK_CONFIG_LTO 00:07:17.704 #define SPDK_CONFIG_MAX_LCORES 128 00:07:17.704 #define SPDK_CONFIG_NVME_CUSE 1 00:07:17.704 #undef SPDK_CONFIG_OCF 00:07:17.704 #define SPDK_CONFIG_OCF_PATH 00:07:17.704 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:17.704 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:17.704 #define SPDK_CONFIG_PGO_DIR 00:07:17.704 #undef SPDK_CONFIG_PGO_USE 00:07:17.704 #define SPDK_CONFIG_PREFIX /usr/local 00:07:17.704 #undef SPDK_CONFIG_RAID5F 00:07:17.704 #undef SPDK_CONFIG_RBD 00:07:17.704 #define SPDK_CONFIG_RDMA 1 00:07:17.704 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:17.704 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:17.704 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:17.704 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:17.704 #define SPDK_CONFIG_SHARED 1 00:07:17.704 #undef SPDK_CONFIG_SMA 00:07:17.704 #define SPDK_CONFIG_TESTS 1 00:07:17.704 #undef SPDK_CONFIG_TSAN 00:07:17.704 #define SPDK_CONFIG_UBLK 1 00:07:17.704 #define SPDK_CONFIG_UBSAN 1 00:07:17.704 #undef SPDK_CONFIG_UNIT_TESTS 00:07:17.704 #undef SPDK_CONFIG_URING 00:07:17.704 #define SPDK_CONFIG_URING_PATH 00:07:17.704 #undef SPDK_CONFIG_URING_ZNS 00:07:17.704 #undef SPDK_CONFIG_USDT 00:07:17.704 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:17.704 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:17.704 #define SPDK_CONFIG_VFIO_USER 1 00:07:17.704 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:17.704 #define SPDK_CONFIG_VHOST 1 00:07:17.704 #define SPDK_CONFIG_VIRTIO 1 00:07:17.704 #undef SPDK_CONFIG_VTUNE 00:07:17.704 #define SPDK_CONFIG_VTUNE_DIR 00:07:17.704 #define SPDK_CONFIG_WERROR 1 00:07:17.704 #define SPDK_CONFIG_WPDK_DIR 00:07:17.704 #undef SPDK_CONFIG_XNVME 00:07:17.704 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:17.704 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:17.705 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:17.706 20:43:21 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
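[Editorial sketch] The xtrace entries above (common/autotest_common.sh around lines 193-240) show the harness wiring up sanitizer behaviour before any test binary runs: it sets ASAN/UBSAN options, rebuilds an LSan suppression file containing "leak:libfuse3.so", and fixes the default RPC socket path. The lines below are a minimal bash reconstruction of that trace, not the verbatim SPDK source; the redirect into the suppression file and the exact ordering are assumptions made for readability.

    # Sketch reconstructed from the xtrace above -- not verbatim autotest_common.sh.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    # Known leak from libfuse3 is suppressed ("echo leak:libfuse3.so" in the trace);
    # the redirect into the file is assumed here, only the echo is visible in the log.
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
    # Sanitizers are told to fail hard and loudly so CI surfaces the first error.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    # Default JSON-RPC socket used by spdk_tgt/nvmf_tgt in the tests that follow.
    export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock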
00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:17.706 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1394028 ]] 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1394028 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.uqBNpf 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.uqBNpf/tests/target /tmp/spdk.uqBNpf 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954236928 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330192896 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118636593152 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129371013120 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10734419968 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680796160 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864503296 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874202624 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9699328 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:17.707 20:43:21 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684060672 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1445888 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937097216 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937101312 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:17.707 * Looking for test storage... 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118636593152 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12949012480 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.707 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:17.708 20:43:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:25.848 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:07:25.848 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:25.848 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:25.848 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.848 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:25.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:07:25.849 00:07:25.849 --- 10.0.0.2 ping statistics --- 00:07:25.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.849 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:25.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:07:25.849 00:07:25.849 --- 10.0.0.1 ping statistics --- 00:07:25.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.849 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 ************************************ 00:07:25.849 START TEST nvmf_filesystem_no_in_capsule 00:07:25.849 ************************************ 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1397676 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1397676 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1397676 ']' 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.849 20:43:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 [2024-07-15 20:43:28.670989] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:07:25.849 [2024-07-15 20:43:28.671035] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.849 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.849 [2024-07-15 20:43:28.733393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.849 [2024-07-15 20:43:28.803132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.849 [2024-07-15 20:43:28.803169] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.849 [2024-07-15 20:43:28.803176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.849 [2024-07-15 20:43:28.803183] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.849 [2024-07-15 20:43:28.803188] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.849 [2024-07-15 20:43:28.803396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.849 [2024-07-15 20:43:28.803572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.849 [2024-07-15 20:43:28.803706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.849 [2024-07-15 20:43:28.803710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 [2024-07-15 20:43:29.519867] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
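The records above establish the test topology before any NVMe-oF traffic flows: one port of the e810 NIC (cvl_0_0) is moved into a fresh network namespace to act as the target at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and connectivity is verified with a ping in each direction. The target application is then started inside that namespace and a TCP transport with a 0-byte in-capsule data size is created. A condensed sketch of those steps, using only commands that appear in the trace (the rpc.py path and the readiness loop are assumptions; the autotest helpers nvmf_tcp_init, nvmfappstart, waitforlisten and rpc_cmd wrap all of this):

    # Sketch of the namespace topology set up by nvmf_tcp_init (interface names from the trace).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Start the target inside the namespace and create the TCP transport (in_capsule=0 pass).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # assumed readiness check
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0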
00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 Malloc1 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 [2024-07-15 20:43:29.629525] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:25.849 { 00:07:25.849 "name": "Malloc1", 00:07:25.849 "aliases": [ 00:07:25.849 "1a678c1b-534b-4c76-9629-9b263bf629ff" 00:07:25.849 ], 00:07:25.849 "product_name": "Malloc disk", 00:07:25.849 "block_size": 512, 00:07:25.849 "num_blocks": 1048576, 00:07:25.849 "uuid": "1a678c1b-534b-4c76-9629-9b263bf629ff", 00:07:25.849 "assigned_rate_limits": { 00:07:25.849 "rw_ios_per_sec": 0, 00:07:25.849 "rw_mbytes_per_sec": 0, 00:07:25.849 "r_mbytes_per_sec": 0, 00:07:25.849 "w_mbytes_per_sec": 0 00:07:25.849 }, 00:07:25.849 "claimed": true, 00:07:25.849 "claim_type": "exclusive_write", 00:07:25.849 "zoned": false, 00:07:25.849 "supported_io_types": { 00:07:25.849 "read": true, 00:07:25.849 "write": true, 00:07:25.849 "unmap": true, 00:07:25.849 "flush": true, 00:07:25.849 "reset": true, 00:07:25.849 "nvme_admin": false, 00:07:25.849 "nvme_io": false, 00:07:25.849 "nvme_io_md": false, 00:07:25.849 "write_zeroes": true, 00:07:25.849 "zcopy": true, 00:07:25.849 "get_zone_info": false, 00:07:25.849 "zone_management": false, 00:07:25.849 "zone_append": false, 00:07:25.849 "compare": false, 00:07:25.849 "compare_and_write": false, 00:07:25.849 "abort": true, 00:07:25.849 "seek_hole": false, 00:07:25.849 "seek_data": false, 00:07:25.849 "copy": true, 00:07:25.849 "nvme_iov_md": false 00:07:25.849 }, 00:07:25.849 "memory_domains": [ 00:07:25.849 { 00:07:25.849 "dma_device_id": "system", 00:07:25.849 "dma_device_type": 1 00:07:25.849 }, 00:07:25.849 { 00:07:25.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.849 "dma_device_type": 2 00:07:25.849 } 00:07:25.849 ], 00:07:25.849 "driver_specific": {} 00:07:25.849 } 00:07:25.849 ]' 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:25.849 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:26.110 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:26.110 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:26.110 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:26.110 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:26.110 20:43:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:27.490 20:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:27.490 20:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:27.490 20:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:07:27.490 20:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:27.490 20:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:29.402 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:29.402 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:29.402 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:29.402 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:29.402 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:29.402 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:29.402 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:29.402 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:29.402 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:29.402 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:29.402 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:29.402 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:29.402 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:29.402 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:29.402 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:29.661 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:29.661 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:29.661 20:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:30.231 20:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.613 
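With the transport up, the trace shows the target being provisioned over RPC (a 512 MiB malloc bdev attached as a namespace of nqn.2016-06.io.spdk:cnode1 with a TCP listener on 10.0.0.2:4420), then the initiator connecting to it and carving a single GPT partition out of the resulting block device. A condensed sketch of that provisioning and connect sequence, reusing the commands and arguments from the records above (rpc_cmd is the autotest wrapper around scripts/rpc.py; variable handling is simplified here):

    # Target side: export a 512 MiB malloc bdev through subsystem cnode1 (commands from the trace).
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: connect, find the device by serial, partition it.
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
                 --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    mkdir -p /mnt/device
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe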
************************************ 00:07:31.613 START TEST filesystem_ext4 00:07:31.613 ************************************ 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:31.613 20:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:31.613 mke2fs 1.46.5 (30-Dec-2021) 00:07:31.613 Discarding device blocks: 0/522240 done 00:07:31.613 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:31.613 Filesystem UUID: 8c53b6f2-4f31-4187-ad98-66843cb4daba 00:07:31.613 Superblock backups stored on blocks: 00:07:31.613 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:31.613 00:07:31.613 Allocating group tables: 0/64 done 00:07:31.613 Writing inode tables: 0/64 done 00:07:31.613 Creating journal (8192 blocks): done 00:07:32.702 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:07:32.702 00:07:32.702 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:32.702 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:32.702 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:32.962 20:43:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1397676 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:32.962 00:07:32.962 real 0m1.571s 00:07:32.962 user 0m0.035s 00:07:32.962 sys 0m0.063s 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:32.962 ************************************ 00:07:32.962 END TEST filesystem_ext4 00:07:32.962 ************************************ 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.962 ************************************ 00:07:32.962 START TEST filesystem_btrfs 00:07:32.962 ************************************ 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:32.962 20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:32.962 
20:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:33.532 btrfs-progs v6.6.2 00:07:33.532 See https://btrfs.readthedocs.io for more information. 00:07:33.532 00:07:33.532 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:33.532 NOTE: several default settings have changed in version 5.15, please make sure 00:07:33.532 this does not affect your deployments: 00:07:33.532 - DUP for metadata (-m dup) 00:07:33.532 - enabled no-holes (-O no-holes) 00:07:33.532 - enabled free-space-tree (-R free-space-tree) 00:07:33.532 00:07:33.532 Label: (null) 00:07:33.532 UUID: 0d9945ce-7a47-48e6-91c8-bce089ac1739 00:07:33.532 Node size: 16384 00:07:33.532 Sector size: 4096 00:07:33.532 Filesystem size: 510.00MiB 00:07:33.532 Block group profiles: 00:07:33.532 Data: single 8.00MiB 00:07:33.532 Metadata: DUP 32.00MiB 00:07:33.532 System: DUP 8.00MiB 00:07:33.532 SSD detected: yes 00:07:33.532 Zoned device: no 00:07:33.532 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:33.532 Runtime features: free-space-tree 00:07:33.532 Checksum: crc32c 00:07:33.532 Number of devices: 1 00:07:33.532 Devices: 00:07:33.532 ID SIZE PATH 00:07:33.532 1 510.00MiB /dev/nvme0n1p1 00:07:33.532 00:07:33.532 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:33.532 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:33.532 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:33.532 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:33.532 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:33.532 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:33.532 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:33.532 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1397676 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:33.792 00:07:33.792 real 0m0.704s 00:07:33.792 user 0m0.024s 00:07:33.792 sys 0m0.134s 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 
00:07:33.792 ************************************ 00:07:33.792 END TEST filesystem_btrfs 00:07:33.792 ************************************ 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.792 ************************************ 00:07:33.792 START TEST filesystem_xfs 00:07:33.792 ************************************ 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:33.792 20:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:33.792 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:33.792 = sectsz=512 attr=2, projid32bit=1 00:07:33.792 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:33.792 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:33.792 data = bsize=4096 blocks=130560, imaxpct=25 00:07:33.792 = sunit=0 swidth=0 blks 00:07:33.792 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:33.792 log =internal log bsize=4096 blocks=16384, version=2 00:07:33.792 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:33.792 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:35.172 Discarding blocks...Done. 
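Each of the three sub-tests in this pass (ext4, btrfs, and now xfs) exercises the same nvmf_filesystem_create helper from target/filesystem.sh: build a filesystem on the exported partition, mount it, create and delete a file, unmount, and then confirm that the target process is still alive and the partition is still visible. A condensed sketch of one such check, with the commands taken from the trace (the retry and timing details of the real helper are omitted; ext4 uses -F where btrfs and xfs use -f, as the make_filesystem records show):

    # Sketch of one nvmf_filesystem_create <fstype> <nvme_name> iteration (simplified).
    fstype=xfs; nvme_name=nvme0n1
    mkfs.$fstype -f "/dev/${nvme_name}p1"           # -F for ext4
    mount "/dev/${nvme_name}p1" /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                              # the nvmf target must still be running
    lsblk -l -o NAME | grep -q -w "${nvme_name}p1"  # the partition must still be exported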
00:07:35.172 20:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:35.172 20:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:36.629 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:36.629 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:36.629 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:36.629 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:36.629 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:36.629 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:36.889 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1397676 00:07:36.889 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:36.889 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:36.889 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:36.889 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:36.889 00:07:36.889 real 0m2.994s 00:07:36.889 user 0m0.025s 00:07:36.889 sys 0m0.076s 00:07:36.889 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.889 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:36.889 ************************************ 00:07:36.889 END TEST filesystem_xfs 00:07:36.889 ************************************ 00:07:36.889 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:36.889 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:37.157 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:37.157 20:43:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:37.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.157 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:37.157 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:37.157 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:37.157 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.157 20:43:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:37.157 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.157 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:37.157 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.157 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.157 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.417 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.417 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:37.417 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1397676 00:07:37.417 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1397676 ']' 00:07:37.417 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1397676 00:07:37.417 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:37.417 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.417 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1397676 00:07:37.417 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:37.417 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.417 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1397676' 00:07:37.417 killing process with pid 1397676 00:07:37.417 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1397676 00:07:37.417 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1397676 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:37.677 00:07:37.677 real 0m12.712s 00:07:37.677 user 0m50.226s 00:07:37.677 sys 0m1.171s 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.677 ************************************ 00:07:37.677 END TEST nvmf_filesystem_no_in_capsule 00:07:37.677 ************************************ 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.677 ************************************ 00:07:37.677 START TEST nvmf_filesystem_in_capsule 00:07:37.677 ************************************ 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1400547 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1400547 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1400547 ']' 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:37.677 20:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.677 [2024-07-15 20:43:41.484476] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:07:37.677 [2024-07-15 20:43:41.484522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.677 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.677 [2024-07-15 20:43:41.551382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.937 [2024-07-15 20:43:41.616739] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.937 [2024-07-15 20:43:41.616776] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
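The first pass (in_capsule=0) is torn down above by removing the test partition, disconnecting the initiator, deleting the subsystem, and killing target pid 1397676; the records that follow repeat the whole flow as nvmf_filesystem_in_capsule with an in-capsule data size of 4096, so the only configuration difference is the -c 4096 argument to nvmf_create_transport. A condensed sketch of the teardown, using the commands from the trace (pid and NQN are the values printed above):

    # Teardown of the in_capsule=0 pass, as shown in the trace.
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1         # remove the SPDK_TEST partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 1397676                                           # nvmfpid of the first target instance
    wait 1397676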
00:07:37.937 [2024-07-15 20:43:41.616784] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.937 [2024-07-15 20:43:41.616790] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.937 [2024-07-15 20:43:41.616796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.937 [2024-07-15 20:43:41.616843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.937 [2024-07-15 20:43:41.616930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.937 [2024-07-15 20:43:41.617071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.937 [2024-07-15 20:43:41.617072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.507 [2024-07-15 20:43:42.304795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.507 Malloc1 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.507 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.767 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.767 20:43:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:38.767 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.767 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.767 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.767 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.767 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.767 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.767 [2024-07-15 20:43:42.416376] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.767 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.767 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:38.768 { 00:07:38.768 "name": "Malloc1", 00:07:38.768 "aliases": [ 00:07:38.768 "d483b3c8-8ad4-4ee6-8e34-2f26c333ac70" 00:07:38.768 ], 00:07:38.768 "product_name": "Malloc disk", 00:07:38.768 "block_size": 512, 00:07:38.768 "num_blocks": 1048576, 00:07:38.768 "uuid": "d483b3c8-8ad4-4ee6-8e34-2f26c333ac70", 00:07:38.768 "assigned_rate_limits": { 00:07:38.768 "rw_ios_per_sec": 0, 00:07:38.768 "rw_mbytes_per_sec": 0, 00:07:38.768 "r_mbytes_per_sec": 0, 00:07:38.768 "w_mbytes_per_sec": 0 00:07:38.768 }, 00:07:38.768 "claimed": true, 00:07:38.768 "claim_type": "exclusive_write", 00:07:38.768 "zoned": false, 00:07:38.768 "supported_io_types": { 00:07:38.768 "read": true, 00:07:38.768 "write": true, 00:07:38.768 "unmap": true, 00:07:38.768 "flush": true, 00:07:38.768 "reset": true, 00:07:38.768 "nvme_admin": false, 00:07:38.768 "nvme_io": false, 00:07:38.768 "nvme_io_md": false, 00:07:38.768 "write_zeroes": true, 00:07:38.768 "zcopy": true, 00:07:38.768 "get_zone_info": false, 00:07:38.768 "zone_management": false, 00:07:38.768 
"zone_append": false, 00:07:38.768 "compare": false, 00:07:38.768 "compare_and_write": false, 00:07:38.768 "abort": true, 00:07:38.768 "seek_hole": false, 00:07:38.768 "seek_data": false, 00:07:38.768 "copy": true, 00:07:38.768 "nvme_iov_md": false 00:07:38.768 }, 00:07:38.768 "memory_domains": [ 00:07:38.768 { 00:07:38.768 "dma_device_id": "system", 00:07:38.768 "dma_device_type": 1 00:07:38.768 }, 00:07:38.768 { 00:07:38.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.768 "dma_device_type": 2 00:07:38.768 } 00:07:38.768 ], 00:07:38.768 "driver_specific": {} 00:07:38.768 } 00:07:38.768 ]' 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:38.768 20:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:40.678 20:43:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:40.678 20:43:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:40.678 20:43:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:40.678 20:43:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:40.678 20:43:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:42.589 20:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:43.972 20:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:43.972 20:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:43.972 20:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:43.972 20:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.972 20:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.972 ************************************ 00:07:43.972 START TEST filesystem_in_capsule_ext4 00:07:43.972 ************************************ 00:07:43.972 20:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:43.972 20:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:43.972 20:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:43.972 20:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:43.972 20:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:43.972 20:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:43.972 20:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:43.972 20:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:43.972 20:43:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:43.972 20:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:43.972 20:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:43.972 mke2fs 1.46.5 (30-Dec-2021) 00:07:43.972 Discarding device blocks: 0/522240 done 00:07:43.972 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:43.972 Filesystem UUID: d8fc2c98-d9e1-4ef6-9122-e06ce8c4d20d 00:07:43.972 Superblock backups stored on blocks: 00:07:43.972 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:43.972 00:07:43.972 Allocating group tables: 0/64 done 00:07:43.972 Writing inode tables: 0/64 done 00:07:43.972 Creating journal (8192 blocks): done 00:07:45.064 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:45.064 00:07:45.064 20:43:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:45.064 20:43:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:45.325 20:43:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1400547 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:45.325 00:07:45.325 real 0m1.573s 00:07:45.325 user 0m0.013s 00:07:45.325 sys 0m0.080s 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:45.325 ************************************ 00:07:45.325 END TEST filesystem_in_capsule_ext4 00:07:45.325 ************************************ 
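That closes the ext4 pass: make_filesystem forced mkfs.ext4 with -F, and the script then mounted the partition, created and removed a file, and unmounted before checking that the partition was still listed. Condensed into plain commands, with the device and mount point taken from the trace:

# Condensed replay of the traced ext4 pass; ext4 is the only fstype that gets -F.
mkfs.ext4 -F /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa                    # prove the filesystem accepts writes
sync
rm /mnt/device/aaa
sync
umount /mnt/device
lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition must still be visible afterwards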
00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.325 ************************************ 00:07:45.325 START TEST filesystem_in_capsule_btrfs 00:07:45.325 ************************************ 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:45.325 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:45.896 btrfs-progs v6.6.2 00:07:45.896 See https://btrfs.readthedocs.io for more information. 00:07:45.896 00:07:45.896 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:45.896 NOTE: several default settings have changed in version 5.15, please make sure 00:07:45.896 this does not affect your deployments: 00:07:45.896 - DUP for metadata (-m dup) 00:07:45.896 - enabled no-holes (-O no-holes) 00:07:45.896 - enabled free-space-tree (-R free-space-tree) 00:07:45.896 00:07:45.896 Label: (null) 00:07:45.896 UUID: b25bbfad-09ac-4bee-9e4b-c4c2bf49f225 00:07:45.896 Node size: 16384 00:07:45.896 Sector size: 4096 00:07:45.896 Filesystem size: 510.00MiB 00:07:45.896 Block group profiles: 00:07:45.896 Data: single 8.00MiB 00:07:45.896 Metadata: DUP 32.00MiB 00:07:45.896 System: DUP 8.00MiB 00:07:45.896 SSD detected: yes 00:07:45.896 Zoned device: no 00:07:45.896 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:45.896 Runtime features: free-space-tree 00:07:45.896 Checksum: crc32c 00:07:45.896 Number of devices: 1 00:07:45.896 Devices: 00:07:45.896 ID SIZE PATH 00:07:45.896 1 510.00MiB /dev/nvme0n1p1 00:07:45.896 00:07:45.896 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:45.896 20:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:46.155 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1400547 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:46.416 00:07:46.416 real 0m0.955s 00:07:46.416 user 0m0.037s 00:07:46.416 sys 0m0.125s 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:46.416 ************************************ 00:07:46.416 END TEST filesystem_in_capsule_btrfs 00:07:46.416 ************************************ 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.416 ************************************ 00:07:46.416 START TEST filesystem_in_capsule_xfs 00:07:46.416 ************************************ 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:46.416 20:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:46.416 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:46.416 = sectsz=512 attr=2, projid32bit=1 00:07:46.417 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:46.417 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:46.417 data = bsize=4096 blocks=130560, imaxpct=25 00:07:46.417 = sunit=0 swidth=0 blks 00:07:46.417 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:46.417 log =internal log bsize=4096 blocks=16384, version=2 00:07:46.417 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:46.417 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:47.357 Discarding blocks...Done. 
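As with btrfs, the xfs branch of make_filesystem falls through the '[ xfs = ext4 ]' check and uses the lowercase -f force flag; only ext4 gets -F. The selection the trace walks through reduces to roughly this sketch:

# Force-flag selection as traced for the three filesystems (fstype comes from the caller).
case "$fstype" in
  ext4) force=-F ;;
  *)    force=-f ;;   # btrfs and xfs both take lowercase -f
esac
mkfs."$fstype" "$force" /dev/nvme0n1p1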
00:07:47.357 20:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:47.357 20:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:49.268 20:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:49.268 20:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:49.268 20:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:49.268 20:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:49.268 20:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:49.268 20:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:49.268 20:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1400547 00:07:49.268 20:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:49.268 20:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:49.268 20:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:49.268 20:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:49.268 00:07:49.268 real 0m2.804s 00:07:49.268 user 0m0.031s 00:07:49.268 sys 0m0.070s 00:07:49.268 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.268 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:49.268 ************************************ 00:07:49.268 END TEST filesystem_in_capsule_xfs 00:07:49.268 ************************************ 00:07:49.268 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:49.268 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:49.528 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:49.528 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:49.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:49.529 20:43:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1400547 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1400547 ']' 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1400547 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1400547 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1400547' 00:07:49.529 killing process with pid 1400547 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1400547 00:07:49.529 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1400547 00:07:49.789 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:49.789 00:07:49.789 real 0m12.175s 00:07:49.789 user 0m47.943s 00:07:49.789 sys 0m1.236s 00:07:49.789 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.789 20:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.789 ************************************ 00:07:49.789 END TEST nvmf_filesystem_in_capsule 00:07:49.789 ************************************ 00:07:49.789 20:43:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:49.789 20:43:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:49.789 20:43:53 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:49.789 20:43:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:49.789 20:43:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:49.789 20:43:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:49.789 20:43:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:49.789 20:43:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:49.789 rmmod nvme_tcp 00:07:49.789 rmmod nvme_fabrics 00:07:50.048 rmmod nvme_keyring 00:07:50.048 20:43:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:50.048 20:43:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:50.048 20:43:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:50.048 20:43:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:50.048 20:43:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:50.048 20:43:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:50.048 20:43:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:50.048 20:43:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:50.048 20:43:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:50.048 20:43:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.048 20:43:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.048 20:43:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.965 20:43:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:51.965 00:07:51.965 real 0m34.657s 00:07:51.965 user 1m40.412s 00:07:51.965 sys 0m7.870s 00:07:51.965 20:43:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.965 20:43:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.965 ************************************ 00:07:51.965 END TEST nvmf_filesystem 00:07:51.965 ************************************ 00:07:51.965 20:43:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:51.965 20:43:55 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:51.965 20:43:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:51.965 20:43:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.965 20:43:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.226 ************************************ 00:07:52.226 START TEST nvmf_target_discovery 00:07:52.226 ************************************ 00:07:52.226 20:43:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:52.226 * Looking for test storage... 
00:07:52.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.226 20:43:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.226 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:52.226 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.226 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.226 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.226 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.226 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.226 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.226 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.226 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.226 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.226 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:52.227 20:43:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:52.227 20:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.227 20:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:52.227 20:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:52.227 20:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:52.227 20:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.227 20:43:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.227 20:43:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.227 20:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:52.227 20:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:52.227 20:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:52.227 20:43:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.816 20:44:02 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:58.816 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:58.816 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:58.816 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:58.816 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:58.816 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.076 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.076 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.076 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:59.076 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.076 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.076 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.076 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:59.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:07:59.076 00:07:59.076 --- 10.0.0.2 ping statistics --- 00:07:59.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.076 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:07:59.076 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:59.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:07:59.076 00:07:59.077 --- 10.0.0.1 ping statistics --- 00:07:59.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.077 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:07:59.077 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.077 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:59.077 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:59.077 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.077 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:59.077 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:59.077 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.077 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:59.077 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:59.077 20:44:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:59.077 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:59.077 20:44:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:59.077 20:44:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.336 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1407137 00:07:59.336 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1407137 00:07:59.336 20:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:59.336 20:44:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1407137 ']' 00:07:59.336 20:44:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.336 20:44:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.336 20:44:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:59.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.336 20:44:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.336 20:44:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.336 [2024-07-15 20:44:03.030114] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:07:59.336 [2024-07-15 20:44:03.030190] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.336 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.336 [2024-07-15 20:44:03.101937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.336 [2024-07-15 20:44:03.177900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.336 [2024-07-15 20:44:03.177938] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.336 [2024-07-15 20:44:03.177945] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.336 [2024-07-15 20:44:03.177952] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.336 [2024-07-15 20:44:03.177957] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.336 [2024-07-15 20:44:03.178096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.336 [2024-07-15 20:44:03.178218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.336 [2024-07-15 20:44:03.178316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.336 [2024-07-15 20:44:03.178317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.959 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.959 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:59.959 20:44:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:59.959 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:59.959 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.959 20:44:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.959 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:59.959 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.959 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 [2024-07-15 20:44:03.854776] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
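With the target app listening, the discovery test provisions four identical targets over the freshly created TCP transport: for each index it creates a null bdev, a subsystem, a namespace mapping, and a TCP listener on 10.0.0.2:4420, then adds a discovery listener and a port-4430 referral. A sketch of that sequence using the RPC names from the trace (the rpc_cmd shim below is illustrative, not the real autotest helper):

rpc_cmd() { ./scripts/rpc.py "$@"; }   # illustrative shim; the autotest wrapper differs
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
  rpc_cmd bdev_null_create Null$i 102400 512                 # size and block size as traced
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430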
00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 Null1 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 [2024-07-15 20:44:03.915066] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 Null2 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:00.220 20:44:03 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 Null3 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 Null4 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:04 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.220 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:08:00.481 00:08:00.481 Discovery Log Number of Records 6, Generation counter 6 00:08:00.481 =====Discovery Log Entry 0====== 00:08:00.481 trtype: tcp 00:08:00.481 adrfam: ipv4 00:08:00.481 subtype: current discovery subsystem 00:08:00.481 treq: not required 00:08:00.481 portid: 0 00:08:00.481 trsvcid: 4420 00:08:00.481 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:00.481 traddr: 10.0.0.2 00:08:00.481 eflags: explicit discovery connections, duplicate discovery information 00:08:00.481 sectype: none 00:08:00.481 =====Discovery Log Entry 1====== 00:08:00.481 trtype: tcp 00:08:00.481 adrfam: ipv4 00:08:00.481 subtype: nvme subsystem 00:08:00.481 treq: not required 00:08:00.481 portid: 0 00:08:00.481 trsvcid: 4420 00:08:00.481 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:00.481 traddr: 10.0.0.2 00:08:00.481 eflags: none 00:08:00.481 sectype: none 00:08:00.481 =====Discovery Log Entry 2====== 00:08:00.481 trtype: tcp 00:08:00.481 adrfam: ipv4 00:08:00.481 subtype: nvme subsystem 00:08:00.481 treq: not required 00:08:00.481 portid: 0 00:08:00.481 trsvcid: 4420 00:08:00.481 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:00.481 traddr: 10.0.0.2 00:08:00.481 eflags: none 00:08:00.481 sectype: none 00:08:00.481 =====Discovery Log Entry 3====== 00:08:00.481 trtype: tcp 00:08:00.481 adrfam: ipv4 00:08:00.481 subtype: nvme subsystem 00:08:00.481 treq: not required 00:08:00.481 portid: 0 00:08:00.481 trsvcid: 4420 00:08:00.481 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:00.481 traddr: 10.0.0.2 00:08:00.481 eflags: none 00:08:00.481 sectype: none 00:08:00.481 =====Discovery Log Entry 4====== 00:08:00.481 trtype: tcp 00:08:00.481 adrfam: ipv4 00:08:00.481 subtype: nvme subsystem 00:08:00.481 treq: not required 
00:08:00.481 portid: 0 00:08:00.481 trsvcid: 4420 00:08:00.481 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:00.481 traddr: 10.0.0.2 00:08:00.481 eflags: none 00:08:00.481 sectype: none 00:08:00.481 =====Discovery Log Entry 5====== 00:08:00.481 trtype: tcp 00:08:00.481 adrfam: ipv4 00:08:00.481 subtype: discovery subsystem referral 00:08:00.481 treq: not required 00:08:00.481 portid: 0 00:08:00.481 trsvcid: 4430 00:08:00.481 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:00.481 traddr: 10.0.0.2 00:08:00.481 eflags: none 00:08:00.481 sectype: none 00:08:00.481 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:00.481 Perform nvmf subsystem discovery via RPC 00:08:00.481 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:00.481 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.481 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.481 [ 00:08:00.481 { 00:08:00.481 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:00.481 "subtype": "Discovery", 00:08:00.481 "listen_addresses": [ 00:08:00.481 { 00:08:00.481 "trtype": "TCP", 00:08:00.481 "adrfam": "IPv4", 00:08:00.481 "traddr": "10.0.0.2", 00:08:00.481 "trsvcid": "4420" 00:08:00.481 } 00:08:00.481 ], 00:08:00.481 "allow_any_host": true, 00:08:00.481 "hosts": [] 00:08:00.481 }, 00:08:00.481 { 00:08:00.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:00.481 "subtype": "NVMe", 00:08:00.481 "listen_addresses": [ 00:08:00.481 { 00:08:00.481 "trtype": "TCP", 00:08:00.481 "adrfam": "IPv4", 00:08:00.481 "traddr": "10.0.0.2", 00:08:00.481 "trsvcid": "4420" 00:08:00.481 } 00:08:00.481 ], 00:08:00.481 "allow_any_host": true, 00:08:00.481 "hosts": [], 00:08:00.481 "serial_number": "SPDK00000000000001", 00:08:00.481 "model_number": "SPDK bdev Controller", 00:08:00.481 "max_namespaces": 32, 00:08:00.481 "min_cntlid": 1, 00:08:00.481 "max_cntlid": 65519, 00:08:00.481 "namespaces": [ 00:08:00.481 { 00:08:00.481 "nsid": 1, 00:08:00.481 "bdev_name": "Null1", 00:08:00.481 "name": "Null1", 00:08:00.481 "nguid": "65945D72940441098C90517C33FB5B71", 00:08:00.481 "uuid": "65945d72-9404-4109-8c90-517c33fb5b71" 00:08:00.481 } 00:08:00.481 ] 00:08:00.481 }, 00:08:00.481 { 00:08:00.481 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:00.481 "subtype": "NVMe", 00:08:00.481 "listen_addresses": [ 00:08:00.481 { 00:08:00.481 "trtype": "TCP", 00:08:00.481 "adrfam": "IPv4", 00:08:00.481 "traddr": "10.0.0.2", 00:08:00.481 "trsvcid": "4420" 00:08:00.481 } 00:08:00.481 ], 00:08:00.481 "allow_any_host": true, 00:08:00.481 "hosts": [], 00:08:00.481 "serial_number": "SPDK00000000000002", 00:08:00.481 "model_number": "SPDK bdev Controller", 00:08:00.481 "max_namespaces": 32, 00:08:00.481 "min_cntlid": 1, 00:08:00.481 "max_cntlid": 65519, 00:08:00.481 "namespaces": [ 00:08:00.481 { 00:08:00.481 "nsid": 1, 00:08:00.481 "bdev_name": "Null2", 00:08:00.481 "name": "Null2", 00:08:00.481 "nguid": "018352BF5D764A8CB9ED12B9838F5081", 00:08:00.482 "uuid": "018352bf-5d76-4a8c-b9ed-12b9838f5081" 00:08:00.482 } 00:08:00.482 ] 00:08:00.482 }, 00:08:00.482 { 00:08:00.482 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:00.482 "subtype": "NVMe", 00:08:00.482 "listen_addresses": [ 00:08:00.482 { 00:08:00.482 "trtype": "TCP", 00:08:00.482 "adrfam": "IPv4", 00:08:00.482 "traddr": "10.0.0.2", 00:08:00.482 "trsvcid": "4420" 00:08:00.482 } 00:08:00.482 ], 00:08:00.482 "allow_any_host": true, 
00:08:00.482 "hosts": [], 00:08:00.482 "serial_number": "SPDK00000000000003", 00:08:00.482 "model_number": "SPDK bdev Controller", 00:08:00.482 "max_namespaces": 32, 00:08:00.482 "min_cntlid": 1, 00:08:00.482 "max_cntlid": 65519, 00:08:00.482 "namespaces": [ 00:08:00.482 { 00:08:00.482 "nsid": 1, 00:08:00.482 "bdev_name": "Null3", 00:08:00.482 "name": "Null3", 00:08:00.482 "nguid": "E44DA1B8BEE34D158688E22597460EED", 00:08:00.482 "uuid": "e44da1b8-bee3-4d15-8688-e22597460eed" 00:08:00.482 } 00:08:00.482 ] 00:08:00.482 }, 00:08:00.482 { 00:08:00.482 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:00.482 "subtype": "NVMe", 00:08:00.482 "listen_addresses": [ 00:08:00.482 { 00:08:00.482 "trtype": "TCP", 00:08:00.482 "adrfam": "IPv4", 00:08:00.482 "traddr": "10.0.0.2", 00:08:00.482 "trsvcid": "4420" 00:08:00.482 } 00:08:00.482 ], 00:08:00.482 "allow_any_host": true, 00:08:00.482 "hosts": [], 00:08:00.482 "serial_number": "SPDK00000000000004", 00:08:00.482 "model_number": "SPDK bdev Controller", 00:08:00.482 "max_namespaces": 32, 00:08:00.482 "min_cntlid": 1, 00:08:00.482 "max_cntlid": 65519, 00:08:00.482 "namespaces": [ 00:08:00.482 { 00:08:00.482 "nsid": 1, 00:08:00.482 "bdev_name": "Null4", 00:08:00.482 "name": "Null4", 00:08:00.482 "nguid": "516FB72331244A0D96FD8D3F0FF500D4", 00:08:00.482 "uuid": "516fb723-3124-4a0d-96fd-8d3f0ff500d4" 00:08:00.482 } 00:08:00.482 ] 00:08:00.482 } 00:08:00.482 ] 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.482 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:00.742 rmmod nvme_tcp 00:08:00.742 rmmod nvme_fabrics 00:08:00.742 rmmod nvme_keyring 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1407137 ']' 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1407137 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1407137 ']' 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1407137 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1407137 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1407137' 00:08:00.742 killing process with pid 1407137 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1407137 00:08:00.742 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1407137 00:08:01.002 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:01.003 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:01.003 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:01.003 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:01.003 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:01.003 20:44:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.003 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.003 20:44:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.916 20:44:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:02.916 00:08:02.916 real 0m10.924s 00:08:02.916 user 0m8.357s 00:08:02.916 sys 0m5.501s 00:08:02.916 20:44:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.916 20:44:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.916 ************************************ 00:08:02.916 END TEST nvmf_target_discovery 00:08:02.916 ************************************ 00:08:03.177 20:44:06 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:03.177 20:44:06 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:03.177 20:44:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:03.177 20:44:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.177 20:44:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:03.177 ************************************ 00:08:03.177 START TEST nvmf_referrals 00:08:03.177 ************************************ 00:08:03.177 20:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:03.177 * Looking for test storage... 00:08:03.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.177 20:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:03.177 20:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:03.177 20:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.177 20:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.178 20:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.178 20:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.178 20:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.178 20:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.178 20:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.178 20:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.178 20:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.178 20:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.178 20:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:03.178 20:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
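The referral addresses exported here (127.0.0.2 through 127.0.0.4) are what referrals.sh later registers against the discovery service and then checks from both sides, via nvmf_discovery_get_referrals over RPC and via nvme discover against the listener on 10.0.0.2:8009. A condensed sketch of that round trip, assembled from the rpc_cmd and nvme invocations that appear further down in this log (the scripts/rpc.py path is an assumption, and the test additionally passes --hostnqn/--hostid to nvme discover), is:

    # sketch: assumes nvmf_tgt is running with a TCP transport already created
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430        # register each referral
    done
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'      # expect the three addresses
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430     # remove again, as the test does
    done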
00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:03.178 20:44:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.326 20:44:13 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:11.326 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:11.326 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.326 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:11.327 20:44:13 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:11.327 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:11.327 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.327 20:44:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.327 20:44:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:11.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:08:11.327 00:08:11.327 --- 10.0.0.2 ping statistics --- 00:08:11.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.327 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:11.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:08:11.327 00:08:11.327 --- 10.0.0.1 ping statistics --- 00:08:11.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.327 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1411819 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1411819 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1411819 ']' 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:11.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.327 20:44:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.327 [2024-07-15 20:44:14.292898] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:08:11.327 [2024-07-15 20:44:14.292967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.327 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.327 [2024-07-15 20:44:14.369061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.327 [2024-07-15 20:44:14.444000] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.327 [2024-07-15 20:44:14.444040] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.327 [2024-07-15 20:44:14.444047] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.327 [2024-07-15 20:44:14.444054] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.327 [2024-07-15 20:44:14.444059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:11.327 [2024-07-15 20:44:14.444110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.327 [2024-07-15 20:44:14.444146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.327 [2024-07-15 20:44:14.444303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.327 [2024-07-15 20:44:14.444303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.327 [2024-07-15 20:44:15.113864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.327 [2024-07-15 20:44:15.130026] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.327 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.588 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:11.588 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:11.588 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:11.588 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.588 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:11.588 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.588 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:11.588 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.588 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.588 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:11.588 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:11.588 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:11.588 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:11.588 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:11.588 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.589 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:11.589 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:11.589 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:11.589 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:11.589 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:11.589 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.589 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.589 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.589 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:11.589 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.589 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.589 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.589 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:11.589 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.589 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.849 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.849 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.849 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:11.849 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.849 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.849 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:11.850 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:12.111 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:12.111 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:12.111 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:12.111 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:12.111 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:12.111 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.111 20:44:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:12.370 20:44:16 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.370 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.630 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:12.630 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:12.630 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:12.630 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.630 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.630 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.630 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.630 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:12.630 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:12.630 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:12.630 20:44:16 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:12.630 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:12.630 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:12.630 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.630 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.890 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:13.150 
20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:13.150 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:13.150 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:13.150 20:44:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:13.150 20:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:13.150 20:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:13.150 20:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:13.150 20:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:13.150 20:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:13.151 rmmod nvme_tcp 00:08:13.151 rmmod nvme_fabrics 00:08:13.151 rmmod nvme_keyring 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1411819 ']' 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1411819 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1411819 ']' 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1411819 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1411819 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1411819' 00:08:13.151 killing process with pid 1411819 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1411819 00:08:13.151 20:44:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1411819 00:08:13.411 20:44:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:13.411 20:44:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:13.411 20:44:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:13.411 20:44:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:13.411 20:44:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:13.411 20:44:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.411 20:44:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.411 20:44:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.321 20:44:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:15.321 00:08:15.321 real 0m12.320s 00:08:15.321 user 0m13.601s 00:08:15.321 sys 0m6.127s 00:08:15.321 20:44:19 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.321 20:44:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.321 ************************************ 00:08:15.321 END TEST nvmf_referrals 00:08:15.321 ************************************ 00:08:15.582 20:44:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:15.582 20:44:19 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:15.582 20:44:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:15.582 20:44:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.582 20:44:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:15.582 ************************************ 00:08:15.582 START TEST nvmf_connect_disconnect 00:08:15.583 ************************************ 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:15.583 * Looking for test storage... 00:08:15.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.583 20:44:19 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:15.583 20:44:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:23.728 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:23.728 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.728 20:44:26 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:23.728 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:23.728 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- 
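The scan above resolves each supported e810 PCI function to its kernel net device through sysfs before deciding which interfaces the TCP tests can use. A small sketch of that lookup for the first port found on this node; the operstate read is an illustration of the script's up/up check, not a line from the trace:

  pci=0000:4b:00.0
  ls /sys/bus/pci/devices/$pci/net/                # -> cvl_0_0 on this node
  cat /sys/bus/pci/devices/$pci/net/*/operstate    # assumed stand-in for the script's 'up == up' test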
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:23.728 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:23.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.545 ms 00:08:23.729 00:08:23.729 --- 10.0.0.2 ping statistics --- 00:08:23.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.729 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:08:23.729 00:08:23.729 --- 10.0.0.1 ping statistics --- 00:08:23.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.729 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1416603 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1416603 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1416603 ']' 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.729 20:44:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.729 [2024-07-15 20:44:26.658915] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
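Each TCP test run puts the target port into its own network namespace, addresses both ends of the link, and then launches nvmf_tgt inside that namespace. Condensed from the commands in the trace (the nvmf_tgt path is given relative to the SPDK checkout):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator side -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF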
00:08:23.729 [2024-07-15 20:44:26.658983] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.729 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.729 [2024-07-15 20:44:26.729791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.729 [2024-07-15 20:44:26.805339] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.729 [2024-07-15 20:44:26.805374] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.729 [2024-07-15 20:44:26.805381] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.729 [2024-07-15 20:44:26.805388] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.729 [2024-07-15 20:44:26.805394] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.729 [2024-07-15 20:44:26.805530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.729 [2024-07-15 20:44:26.805659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.729 [2024-07-15 20:44:26.805818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.729 [2024-07-15 20:44:26.805819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.729 [2024-07-15 20:44:27.480750] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:23.729 20:44:27 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.729 [2024-07-15 20:44:27.540054] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:23.729 20:44:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:27.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:42.097 rmmod nvme_tcp 00:08:42.097 rmmod nvme_fabrics 00:08:42.097 rmmod nvme_keyring 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1416603 ']' 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1416603 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 1416603 ']' 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1416603 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1416603 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1416603' 00:08:42.097 killing process with pid 1416603 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1416603 00:08:42.097 20:44:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1416603 00:08:42.357 20:44:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:42.357 20:44:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:42.357 20:44:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:42.357 20:44:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.357 20:44:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:42.357 20:44:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.357 20:44:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.357 20:44:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.269 20:44:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:44.269 00:08:44.269 real 0m28.866s 00:08:44.269 user 1m18.750s 00:08:44.269 sys 0m6.634s 00:08:44.269 20:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.270 20:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:44.270 ************************************ 00:08:44.270 END TEST nvmf_connect_disconnect 00:08:44.270 ************************************ 00:08:44.530 20:44:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:44.530 20:44:48 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:44.530 20:44:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:44.530 20:44:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.530 20:44:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:44.530 ************************************ 00:08:44.530 START TEST nvmf_multitarget 00:08:44.530 ************************************ 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:44.530 * Looking for test storage... 
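The connect_disconnect run that just finished builds a single-namespace subsystem over the RPC socket and then attaches and detaches the kernel host num_iterations=5 times. A sketch of the same sequence; scripts/rpc.py stands in for the script's rpc_cmd helper, and the connect/disconnect loop body is reconstructed from the "disconnected 1 controller(s)" lines rather than copied from the script:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512                  # -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  for i in $(seq 1 5); do
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
           --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done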
00:08:44.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:44.530 20:44:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:44.531 20:44:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:51.117 20:44:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.117 20:44:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:51.117 20:44:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:51.117 20:44:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:51.117 20:44:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:51.117 20:44:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:51.117 20:44:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:51.117 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:51.117 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:51.117 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:51.117 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:51.117 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:51.117 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:51.117 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:51.118 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:51.118 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.118 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:51.379 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:51.379 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:51.379 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:51.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:51.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:08:51.640 00:08:51.640 --- 10.0.0.2 ping statistics --- 00:08:51.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.640 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:08:51.640 00:08:51.640 --- 10.0.0.1 ping statistics --- 00:08:51.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.640 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1424633 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1424633 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1424633 ']' 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:51.640 20:44:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:51.640 [2024-07-15 20:44:55.434360] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:08:51.640 [2024-07-15 20:44:55.434430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.640 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.640 [2024-07-15 20:44:55.506594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.901 [2024-07-15 20:44:55.576895] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.901 [2024-07-15 20:44:55.576932] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.901 [2024-07-15 20:44:55.576940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.901 [2024-07-15 20:44:55.576947] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.901 [2024-07-15 20:44:55.576952] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.901 [2024-07-15 20:44:55.577021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.901 [2024-07-15 20:44:55.577155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.901 [2024-07-15 20:44:55.577255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.901 [2024-07-15 20:44:55.577256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.472 20:44:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:52.472 20:44:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:52.472 20:44:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:52.472 20:44:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:52.472 20:44:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:52.472 20:44:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.472 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:52.472 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:52.472 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:52.472 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:52.472 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:52.733 "nvmf_tgt_1" 00:08:52.733 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:52.733 "nvmf_tgt_2" 00:08:52.733 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:52.733 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:52.995 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:52.995 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:52.995 true 00:08:52.995 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:52.995 true 00:08:52.995 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:52.995 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:53.255 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:53.255 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:53.255 20:44:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:53.256 20:44:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:53.256 20:44:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:53.256 20:44:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.256 20:44:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:53.256 20:44:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.256 20:44:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.256 rmmod nvme_tcp 00:08:53.256 rmmod nvme_fabrics 00:08:53.256 rmmod nvme_keyring 00:08:53.256 20:44:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.256 20:44:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:53.256 20:44:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:53.256 20:44:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1424633 ']' 00:08:53.256 20:44:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1424633 00:08:53.256 20:44:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1424633 ']' 00:08:53.256 20:44:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1424633 00:08:53.256 20:44:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:53.256 20:44:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:53.256 20:44:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1424633 00:08:53.256 20:44:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:53.256 20:44:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:53.256 20:44:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1424633' 00:08:53.256 killing process with pid 1424633 00:08:53.256 20:44:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1424633 00:08:53.256 20:44:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1424633 00:08:53.517 20:44:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:53.517 20:44:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:53.517 20:44:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:53.517 20:44:57 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.517 20:44:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.517 20:44:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.517 20:44:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.517 20:44:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.430 20:44:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:55.430 00:08:55.430 real 0m11.066s 00:08:55.430 user 0m9.246s 00:08:55.430 sys 0m5.670s 00:08:55.430 20:44:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.430 20:44:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:55.430 ************************************ 00:08:55.430 END TEST nvmf_multitarget 00:08:55.430 ************************************ 00:08:55.691 20:44:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:55.691 20:44:59 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:55.691 20:44:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:55.691 20:44:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.691 20:44:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:55.691 ************************************ 00:08:55.691 START TEST nvmf_rpc 00:08:55.691 ************************************ 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:55.691 * Looking for test storage... 
00:08:55.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.691 20:44:59 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:55.692 20:44:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:02.285 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:02.587 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:02.587 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:02.587 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:02.587 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.587 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:02.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:09:02.851 00:09:02.851 --- 10.0.0.2 ping statistics --- 00:09:02.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.851 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.393 ms 00:09:02.851 00:09:02.851 --- 10.0.0.1 ping statistics --- 00:09:02.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.851 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1429196 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1429196 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1429196 ']' 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.851 20:45:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.851 [2024-07-15 20:45:06.595822] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:09:02.851 [2024-07-15 20:45:06.595887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.851 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.851 [2024-07-15 20:45:06.666070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.851 [2024-07-15 20:45:06.741484] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.851 [2024-07-15 20:45:06.741523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:02.851 [2024-07-15 20:45:06.741531] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.851 [2024-07-15 20:45:06.741541] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.851 [2024-07-15 20:45:06.741547] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.851 [2024-07-15 20:45:06.741689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.851 [2024-07-15 20:45:06.741801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.851 [2024-07-15 20:45:06.741958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.851 [2024-07-15 20:45:06.741959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:03.791 "tick_rate": 2400000000, 00:09:03.791 "poll_groups": [ 00:09:03.791 { 00:09:03.791 "name": "nvmf_tgt_poll_group_000", 00:09:03.791 "admin_qpairs": 0, 00:09:03.791 "io_qpairs": 0, 00:09:03.791 "current_admin_qpairs": 0, 00:09:03.791 "current_io_qpairs": 0, 00:09:03.791 "pending_bdev_io": 0, 00:09:03.791 "completed_nvme_io": 0, 00:09:03.791 "transports": [] 00:09:03.791 }, 00:09:03.791 { 00:09:03.791 "name": "nvmf_tgt_poll_group_001", 00:09:03.791 "admin_qpairs": 0, 00:09:03.791 "io_qpairs": 0, 00:09:03.791 "current_admin_qpairs": 0, 00:09:03.791 "current_io_qpairs": 0, 00:09:03.791 "pending_bdev_io": 0, 00:09:03.791 "completed_nvme_io": 0, 00:09:03.791 "transports": [] 00:09:03.791 }, 00:09:03.791 { 00:09:03.791 "name": "nvmf_tgt_poll_group_002", 00:09:03.791 "admin_qpairs": 0, 00:09:03.791 "io_qpairs": 0, 00:09:03.791 "current_admin_qpairs": 0, 00:09:03.791 "current_io_qpairs": 0, 00:09:03.791 "pending_bdev_io": 0, 00:09:03.791 "completed_nvme_io": 0, 00:09:03.791 "transports": [] 00:09:03.791 }, 00:09:03.791 { 00:09:03.791 "name": "nvmf_tgt_poll_group_003", 00:09:03.791 "admin_qpairs": 0, 00:09:03.791 "io_qpairs": 0, 00:09:03.791 "current_admin_qpairs": 0, 00:09:03.791 "current_io_qpairs": 0, 00:09:03.791 "pending_bdev_io": 0, 00:09:03.791 "completed_nvme_io": 0, 00:09:03.791 "transports": [] 00:09:03.791 } 00:09:03.791 ] 00:09:03.791 }' 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.791 [2024-07-15 20:45:07.548113] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.791 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:03.791 "tick_rate": 2400000000, 00:09:03.791 "poll_groups": [ 00:09:03.791 { 00:09:03.791 "name": "nvmf_tgt_poll_group_000", 00:09:03.791 "admin_qpairs": 0, 00:09:03.791 "io_qpairs": 0, 00:09:03.791 "current_admin_qpairs": 0, 00:09:03.791 "current_io_qpairs": 0, 00:09:03.792 "pending_bdev_io": 0, 00:09:03.792 "completed_nvme_io": 0, 00:09:03.792 "transports": [ 00:09:03.792 { 00:09:03.792 "trtype": "TCP" 00:09:03.792 } 00:09:03.792 ] 00:09:03.792 }, 00:09:03.792 { 00:09:03.792 "name": "nvmf_tgt_poll_group_001", 00:09:03.792 "admin_qpairs": 0, 00:09:03.792 "io_qpairs": 0, 00:09:03.792 "current_admin_qpairs": 0, 00:09:03.792 "current_io_qpairs": 0, 00:09:03.792 "pending_bdev_io": 0, 00:09:03.792 "completed_nvme_io": 0, 00:09:03.792 "transports": [ 00:09:03.792 { 00:09:03.792 "trtype": "TCP" 00:09:03.792 } 00:09:03.792 ] 00:09:03.792 }, 00:09:03.792 { 00:09:03.792 "name": "nvmf_tgt_poll_group_002", 00:09:03.792 "admin_qpairs": 0, 00:09:03.792 "io_qpairs": 0, 00:09:03.792 "current_admin_qpairs": 0, 00:09:03.792 "current_io_qpairs": 0, 00:09:03.792 "pending_bdev_io": 0, 00:09:03.792 "completed_nvme_io": 0, 00:09:03.792 "transports": [ 00:09:03.792 { 00:09:03.792 "trtype": "TCP" 00:09:03.792 } 00:09:03.792 ] 00:09:03.792 }, 00:09:03.792 { 00:09:03.792 "name": "nvmf_tgt_poll_group_003", 00:09:03.792 "admin_qpairs": 0, 00:09:03.792 "io_qpairs": 0, 00:09:03.792 "current_admin_qpairs": 0, 00:09:03.792 "current_io_qpairs": 0, 00:09:03.792 "pending_bdev_io": 0, 00:09:03.792 "completed_nvme_io": 0, 00:09:03.792 "transports": [ 00:09:03.792 { 00:09:03.792 "trtype": "TCP" 00:09:03.792 } 00:09:03.792 ] 00:09:03.792 } 00:09:03.792 ] 00:09:03.792 }' 00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.792 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.051 Malloc1 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.051 [2024-07-15 20:45:07.735907] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:04.051 [2024-07-15 20:45:07.762752] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:04.051 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:04.051 could not add new controller: failed to write to nvme-fabrics device 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.051 20:45:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.959 20:45:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.959 20:45:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:05.959 20:45:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.959 20:45:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:05.959 20:45:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.871 20:45:11 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.871 [2024-07-15 20:45:11.531171] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:07.871 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:07.871 could not add new controller: failed to write to nvme-fabrics device 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.871 20:45:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.254 20:45:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:09.254 20:45:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:09.254 20:45:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.254 20:45:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:09.254 20:45:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:11.794 20:45:15 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.794 [2024-07-15 20:45:15.282594] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.794 20:45:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:13.178 20:45:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:13.178 20:45:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:13.178 20:45:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:13.178 20:45:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:13.179 20:45:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:15.090 20:45:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:15.090 20:45:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:15.090 20:45:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.090 20:45:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:15.090 20:45:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.090 20:45:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:15.090 20:45:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.090 20:45:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:15.090 20:45:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:15.090 20:45:18 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:15.090 20:45:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.090 20:45:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:15.090 20:45:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.350 20:45:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:15.350 20:45:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.350 20:45:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.350 20:45:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.350 [2024-07-15 20:45:19.034243] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.350 20:45:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:16.734 20:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:16.734 20:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:16.734 20:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.734 20:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:16.734 20:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:19.274 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:19.274 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:19.274 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.275 [2024-07-15 20:45:22.819314] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.275 20:45:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:20.660 20:45:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:20.660 20:45:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:20.660 20:45:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.660 20:45:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:20.660 20:45:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:22.571 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:22.572 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:22.572 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.572 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:22.572 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.572 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:22.572 20:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.833 [2024-07-15 20:45:26.581940] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.833 20:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:24.751 20:45:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:24.751 20:45:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:24.751 20:45:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.751 20:45:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:24.752 20:45:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.747 
20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:26.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.747 [2024-07-15 20:45:30.328986] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.747 20:45:30 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.747 20:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:28.130 20:45:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:28.130 20:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:28.130 20:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:28.130 20:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:28.130 20:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:30.042 20:45:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:30.042 20:45:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:30.042 20:45:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.303 20:45:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:30.303 20:45:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.303 20:45:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:30.303 20:45:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.303 [2024-07-15 20:45:34.084657] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.303 [2024-07-15 20:45:34.144799] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.303 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 [2024-07-15 20:45:34.208986] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
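The connect/disconnect passes earlier in this run poll for the namespace's block device to appear and disappear by serial number (the repeated lsblk/grep lines above). A rough reconstruction of those two helpers from common/autotest_common.sh, based only on the trace; the retry limits and exact option ordering are assumptions:

    waitforserial() {
        # Poll until lsblk reports the expected number of block devices
        # carrying the given serial (here SPDKISFASTANDAWESOME).
        local serial=$1 expected=${2:-1} i=0 found=0
        while (( i++ <= 15 )); do
            sleep 2
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( found == expected )) && return 0
        done
        return 1
    }

    waitforserial_disconnect() {
        # Poll until no block device with the given serial remains visible.
        local serial=$1 i=0
        while (( i++ <= 20 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 1
        done
        return 1
    }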
00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 [2024-07-15 20:45:34.269165] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
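Stripped of the xtrace plumbing, each iteration of the loop being traced here issues the same six RPCs against the running target and never attaches a host. A condensed sketch, assuming scripts/rpc.py talks to the target's default RPC socket:

    rpc=scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 1
        $rpc nvmf_delete_subsystem "$nqn"
    done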
00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 [2024-07-15 20:45:34.325364] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.564 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:30.564 "tick_rate": 2400000000, 00:09:30.564 "poll_groups": [ 00:09:30.564 { 00:09:30.564 "name": "nvmf_tgt_poll_group_000", 00:09:30.564 "admin_qpairs": 0, 00:09:30.565 "io_qpairs": 224, 00:09:30.565 "current_admin_qpairs": 0, 00:09:30.565 "current_io_qpairs": 0, 00:09:30.565 "pending_bdev_io": 0, 00:09:30.565 "completed_nvme_io": 273, 00:09:30.565 "transports": [ 00:09:30.565 { 00:09:30.565 "trtype": "TCP" 00:09:30.565 } 00:09:30.565 ] 00:09:30.565 }, 00:09:30.565 { 00:09:30.565 "name": "nvmf_tgt_poll_group_001", 00:09:30.565 "admin_qpairs": 1, 00:09:30.565 "io_qpairs": 223, 00:09:30.565 "current_admin_qpairs": 0, 00:09:30.565 "current_io_qpairs": 0, 00:09:30.565 "pending_bdev_io": 0, 00:09:30.565 "completed_nvme_io": 409, 00:09:30.565 "transports": [ 00:09:30.565 { 00:09:30.565 "trtype": "TCP" 00:09:30.565 } 00:09:30.565 ] 00:09:30.565 }, 00:09:30.565 { 
00:09:30.565 "name": "nvmf_tgt_poll_group_002", 00:09:30.565 "admin_qpairs": 6, 00:09:30.565 "io_qpairs": 218, 00:09:30.565 "current_admin_qpairs": 0, 00:09:30.565 "current_io_qpairs": 0, 00:09:30.565 "pending_bdev_io": 0, 00:09:30.565 "completed_nvme_io": 221, 00:09:30.565 "transports": [ 00:09:30.565 { 00:09:30.565 "trtype": "TCP" 00:09:30.565 } 00:09:30.565 ] 00:09:30.565 }, 00:09:30.565 { 00:09:30.565 "name": "nvmf_tgt_poll_group_003", 00:09:30.565 "admin_qpairs": 0, 00:09:30.565 "io_qpairs": 224, 00:09:30.565 "current_admin_qpairs": 0, 00:09:30.565 "current_io_qpairs": 0, 00:09:30.565 "pending_bdev_io": 0, 00:09:30.565 "completed_nvme_io": 336, 00:09:30.565 "transports": [ 00:09:30.565 { 00:09:30.565 "trtype": "TCP" 00:09:30.565 } 00:09:30.565 ] 00:09:30.565 } 00:09:30.565 ] 00:09:30.565 }' 00:09:30.565 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:30.565 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:30.565 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:30.565 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:30.565 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:30.565 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:30.565 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:30.565 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:30.565 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:30.825 rmmod nvme_tcp 00:09:30.825 rmmod nvme_fabrics 00:09:30.825 rmmod nvme_keyring 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1429196 ']' 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1429196 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1429196 ']' 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1429196 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1429196 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1429196' 00:09:30.825 killing process with pid 1429196 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1429196 00:09:30.825 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1429196 00:09:31.086 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.086 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.086 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.086 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.086 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.086 20:45:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.086 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.086 20:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.999 20:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:32.999 00:09:32.999 real 0m37.452s 00:09:32.999 user 1m53.883s 00:09:32.999 sys 0m7.133s 00:09:32.999 20:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.999 20:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.999 ************************************ 00:09:32.999 END TEST nvmf_rpc 00:09:32.999 ************************************ 00:09:32.999 20:45:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:33.000 20:45:36 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:33.000 20:45:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:33.000 20:45:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.000 20:45:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.261 ************************************ 00:09:33.261 START TEST nvmf_invalid 00:09:33.261 ************************************ 00:09:33.261 20:45:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:33.261 * Looking for test storage... 
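Before the teardown above, the nvmf_rpc test cross-checked nvmf_get_stats by summing counters across poll groups with a small jq/awk helper and asserting the totals are non-zero. Roughly, and assuming `stats` holds the JSON printed in the trace (the real jsum in target/rpc.sh may pipe the variable differently):

    # jsum sums one numeric field across all poll groups in the stats JSON.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    stats=$(scripts/rpc.py nvmf_get_stats)
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 889 in this run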
00:09:33.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.261 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.262 20:45:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:41.455 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:41.455 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:41.455 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:41.455 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.455 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.456 20:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:41.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:09:41.456 00:09:41.456 --- 10.0.0.2 ping statistics --- 00:09:41.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.456 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:09:41.456 00:09:41.456 --- 10.0.0.1 ping statistics --- 00:09:41.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.456 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1439523 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1439523 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1439523 ']' 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.456 20:45:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:41.456 [2024-07-15 20:45:44.360343] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
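The block above boils down to a small per-test network sandbox: the target-side E810 port (cvl_0_0) is moved into its own network namespace, both sides get addresses on 10.0.0.0/24, TCP port 4420 is opened, and a ping in each direction proves the path before nvmf_tgt is started inside the namespace. Condensed from the commands in the trace; the relative nvmf_tgt path and the backgrounding are assumptions:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &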
00:09:41.456 [2024-07-15 20:45:44.360404] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.456 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.456 [2024-07-15 20:45:44.428334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.456 [2024-07-15 20:45:44.492894] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.456 [2024-07-15 20:45:44.492933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.456 [2024-07-15 20:45:44.492940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.456 [2024-07-15 20:45:44.492946] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.456 [2024-07-15 20:45:44.492952] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.456 [2024-07-15 20:45:44.493088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.456 [2024-07-15 20:45:44.493197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.456 [2024-07-15 20:45:44.493546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.456 [2024-07-15 20:45:44.493547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.456 20:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.456 20:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:41.456 20:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:41.456 20:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:41.456 20:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:41.456 20:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.456 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:41.456 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12832 00:09:41.456 [2024-07-15 20:45:45.321143] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:41.717 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:41.717 { 00:09:41.717 "nqn": "nqn.2016-06.io.spdk:cnode12832", 00:09:41.717 "tgt_name": "foobar", 00:09:41.717 "method": "nvmf_create_subsystem", 00:09:41.717 "req_id": 1 00:09:41.717 } 00:09:41.717 Got JSON-RPC error response 00:09:41.717 response: 00:09:41.717 { 00:09:41.717 "code": -32603, 00:09:41.717 "message": "Unable to find target foobar" 00:09:41.717 }' 00:09:41.717 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:41.717 { 00:09:41.717 "nqn": "nqn.2016-06.io.spdk:cnode12832", 00:09:41.717 "tgt_name": "foobar", 00:09:41.717 "method": "nvmf_create_subsystem", 00:09:41.717 "req_id": 1 00:09:41.717 } 00:09:41.717 Got JSON-RPC error response 00:09:41.717 response: 00:09:41.717 { 00:09:41.717 "code": -32603, 00:09:41.717 "message": "Unable to find target foobar" 
00:09:41.717 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:41.717 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:41.717 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21206 00:09:41.717 [2024-07-15 20:45:45.497708] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21206: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:41.717 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:41.717 { 00:09:41.717 "nqn": "nqn.2016-06.io.spdk:cnode21206", 00:09:41.717 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:41.717 "method": "nvmf_create_subsystem", 00:09:41.717 "req_id": 1 00:09:41.717 } 00:09:41.717 Got JSON-RPC error response 00:09:41.717 response: 00:09:41.717 { 00:09:41.717 "code": -32602, 00:09:41.717 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:41.717 }' 00:09:41.717 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:41.717 { 00:09:41.717 "nqn": "nqn.2016-06.io.spdk:cnode21206", 00:09:41.717 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:41.717 "method": "nvmf_create_subsystem", 00:09:41.717 "req_id": 1 00:09:41.717 } 00:09:41.717 Got JSON-RPC error response 00:09:41.717 response: 00:09:41.717 { 00:09:41.717 "code": -32602, 00:09:41.717 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:41.717 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:41.717 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:41.717 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13745 00:09:41.978 [2024-07-15 20:45:45.674303] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13745: invalid model number 'SPDK_Controller' 00:09:41.978 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:41.978 { 00:09:41.978 "nqn": "nqn.2016-06.io.spdk:cnode13745", 00:09:41.978 "model_number": "SPDK_Controller\u001f", 00:09:41.978 "method": "nvmf_create_subsystem", 00:09:41.978 "req_id": 1 00:09:41.978 } 00:09:41.978 Got JSON-RPC error response 00:09:41.978 response: 00:09:41.978 { 00:09:41.978 "code": -32602, 00:09:41.978 "message": "Invalid MN SPDK_Controller\u001f" 00:09:41.978 }' 00:09:41.978 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:41.978 { 00:09:41.978 "nqn": "nqn.2016-06.io.spdk:cnode13745", 00:09:41.978 "model_number": "SPDK_Controller\u001f", 00:09:41.978 "method": "nvmf_create_subsystem", 00:09:41.978 "req_id": 1 00:09:41.978 } 00:09:41.978 Got JSON-RPC error response 00:09:41.978 response: 00:09:41.978 { 00:09:41.978 "code": -32602, 00:09:41.978 "message": "Invalid MN SPDK_Controller\u001f" 00:09:41.978 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:41.978 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:41.978 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:41.978 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:41.978 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:41.978 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 
20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
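The character-by-character trace running through this stretch is invalid.sh's gen_random_s helper assembling a random 21-character string from the printable ASCII range, which is then rejected as an invalid serial number. A rough reconstruction from the trace follows; the random index selection is not visible in the xtrace output, so the ${chars[RANDOM % ...]} expression is an assumption rather than a quote of the script:

    # Sketch of target/invalid.sh gen_random_s, inferred from the trace:
    # chars holds the decimal codes 32..127; each pass converts one chosen code
    # to hex (printf %x) and appends the decoded character (echo -e '\xNN').
    gen_random_s() {
        local length=$1 ll
        local chars=({32..127})
        local string=""
        for ((ll = 0; ll < length; ll++)); do
            # assumption: how the code is picked is not shown in the trace
            local code=${chars[RANDOM % ${#chars[@]}]}
            string+="$(echo -e "\x$(printf %x "$code")")"
        done
        # invalid.sh@28 also checks whether the string starts with '-'; that
        # branch is not taken in this run, so its body is not reconstructed here.
        echo "$string"
    }

It is invoked here as gen_random_s 21, and again as gen_random_s 41 further down.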
00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ D == \- ]] 00:09:41.979 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'D)bTAaww3tZw?UzY%LQ-)' 00:09:42.240 20:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'D)bTAaww3tZw?UzY%LQ-)' nqn.2016-06.io.spdk:cnode21385 00:09:42.240 [2024-07-15 20:45:46.011339] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21385: invalid serial number 'D)bTAaww3tZw?UzY%LQ-)' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:42.240 { 00:09:42.240 "nqn": "nqn.2016-06.io.spdk:cnode21385", 00:09:42.240 "serial_number": "D)bTAaww3tZw?UzY%LQ-)", 00:09:42.240 "method": "nvmf_create_subsystem", 00:09:42.240 "req_id": 1 00:09:42.240 } 00:09:42.240 Got JSON-RPC error response 00:09:42.240 response: 00:09:42.240 { 00:09:42.240 "code": -32602, 00:09:42.240 "message": "Invalid SN D)bTAaww3tZw?UzY%LQ-)" 00:09:42.240 }' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:42.240 { 00:09:42.240 "nqn": "nqn.2016-06.io.spdk:cnode21385", 00:09:42.240 "serial_number": "D)bTAaww3tZw?UzY%LQ-)", 00:09:42.240 "method": "nvmf_create_subsystem", 00:09:42.240 "req_id": 1 00:09:42.240 } 00:09:42.240 Got JSON-RPC error response 00:09:42.240 response: 00:09:42.240 { 00:09:42.240 "code": -32602, 00:09:42.240 "message": "Invalid SN D)bTAaww3tZw?UzY%LQ-)" 00:09:42.240 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.240 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:42.501 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ k == \- ]] 00:09:42.502 20:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'kC[ /dev/null' 00:09:44.584 20:45:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.495 20:45:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:46.495 00:09:46.495 real 0m13.421s 00:09:46.495 user 0m19.239s 00:09:46.495 sys 0m6.283s 00:09:46.495 20:45:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:46.495 20:45:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:46.495 ************************************ 00:09:46.495 END TEST nvmf_invalid 00:09:46.495 ************************************ 00:09:46.495 20:45:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:46.495 20:45:50 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:46.495 20:45:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:46.495 20:45:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.495 20:45:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:46.755 ************************************ 00:09:46.755 START TEST nvmf_abort 00:09:46.755 ************************************ 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:46.755 * Looking for test storage... 
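The nvmf_abort prologue below repeats the same environment bring-up as the previous test: it sources nvmf/common.sh, finds the two e810 ports (0000:4b:00.0 and 0000:4b:00.1 as cvl_0_0 and cvl_0_1), and splits them across a network namespace so the target (10.0.0.2 on cvl_0_0, inside the namespace) and the initiator (10.0.0.1 on cvl_0_1) sit on separate interfaces. Condensed, the namespace plumbing traced further down is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

followed by a one-packet ping in each direction to confirm 10.0.0.1 and 10.0.0.2 can reach each other.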
00:09:46.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.755 20:45:50 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:46.756 20:45:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:54.911 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.911 20:45:57 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:54.911 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:54.911 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:54.911 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.911 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:54.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:09:54.912 00:09:54.912 --- 10.0.0.2 ping statistics --- 00:09:54.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.912 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:54.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:09:54.912 00:09:54.912 --- 10.0.0.1 ping statistics --- 00:09:54.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.912 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1444688 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1444688 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1444688 ']' 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:54.912 20:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:54.912 [2024-07-15 20:45:57.755374] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:09:54.912 [2024-07-15 20:45:57.755470] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.912 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.912 [2024-07-15 20:45:57.846165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:54.912 [2024-07-15 20:45:57.939726] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.912 [2024-07-15 20:45:57.939782] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:54.912 [2024-07-15 20:45:57.939791] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.912 [2024-07-15 20:45:57.939799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.912 [2024-07-15 20:45:57.939805] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.912 [2024-07-15 20:45:57.939938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.912 [2024-07-15 20:45:57.940106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.912 [2024-07-15 20:45:57.940107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:54.912 [2024-07-15 20:45:58.585022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:54.912 Malloc0 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:54.912 Delay0 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.912 20:45:58 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:54.912 [2024-07-15 20:45:58.664892] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.912 20:45:58 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:54.912 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.912 [2024-07-15 20:45:58.775900] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:57.475 Initializing NVMe Controllers 00:09:57.475 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:57.475 controller IO queue size 128 less than required 00:09:57.475 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:57.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:57.475 Initialization complete. Launching workers. 
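For reference, everything the abort run needs was created through RPCs in the trace just above. A minimal sketch of that target-side sequence and the initiator command, with the long Jenkins paths shortened to rpc.py and relative paths (the 1000000-unit delay arguments on Delay0 appear intended to keep I/O queued long enough for the aborts to land):

  # target side, against the nvmf_tgt started by nvmfappstart -m 0xE
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # initiator side: 128-deep I/O with abort commands issued against it
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The NS/CTRLR counters that follow summarize how the queued I/Os and the submitted abort commands fared.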
00:09:57.475 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 30735 00:09:57.475 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30799, failed to submit 62 00:09:57.475 success 30739, unsuccess 60, failed 0 00:09:57.475 20:46:00 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:57.475 20:46:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.475 20:46:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:57.475 20:46:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.475 20:46:00 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:57.475 20:46:00 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:57.475 20:46:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.475 20:46:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:57.475 20:46:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.475 20:46:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:57.475 20:46:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.475 20:46:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.475 rmmod nvme_tcp 00:09:57.475 rmmod nvme_fabrics 00:09:57.475 rmmod nvme_keyring 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1444688 ']' 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1444688 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1444688 ']' 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1444688 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1444688 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1444688' 00:09:57.475 killing process with pid 1444688 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1444688 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1444688 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.475 20:46:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.021 20:46:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:00.021 00:10:00.021 real 0m12.884s 00:10:00.021 user 0m13.735s 00:10:00.021 sys 0m6.196s 00:10:00.021 20:46:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.021 20:46:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.021 ************************************ 00:10:00.021 END TEST nvmf_abort 00:10:00.021 ************************************ 00:10:00.021 20:46:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:00.021 20:46:03 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:00.021 20:46:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:00.021 20:46:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.021 20:46:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:00.021 ************************************ 00:10:00.021 START TEST nvmf_ns_hotplug_stress 00:10:00.021 ************************************ 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:00.021 * Looking for test storage... 00:10:00.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.021 20:46:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.021 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.022 20:46:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:00.022 20:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:06.609 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:06.609 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.609 20:46:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:06.609 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:06.609 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.609 20:46:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.609 20:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.609 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.609 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.609 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.609 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.609 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:10:06.610 00:10:06.610 --- 10.0.0.2 ping statistics --- 00:10:06.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.610 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:10:06.610 00:10:06.610 --- 10.0.0.1 ping statistics --- 00:10:06.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.610 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1449383 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1449383 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1449383 ']' 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.610 20:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.610 [2024-07-15 20:46:10.308575] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
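The common.sh bring-up traced above splits the two e810 ports between network namespaces so that target and initiator traffic really traverse the NICs: cvl_0_0 (10.0.0.2) is moved into cvl_0_0_ns_spdk and hosts the target, while cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator. Condensed from the commands above into a sketch (not the full common.sh), with the nvmf_tgt flags annotated:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # -i 0: shared-memory instance ID, -e 0xFFFF: all tracepoint groups,
  # -m 0xE: run reactors on cores 1-3 (matches the three reactor start-up notices)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The two pings (0.494 ms to 10.0.0.2 and 0.348 ms back to 10.0.0.1) confirm the namespaced link is passing traffic before the target application is started.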
00:10:06.610 [2024-07-15 20:46:10.308624] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.610 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.610 [2024-07-15 20:46:10.390199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:06.610 [2024-07-15 20:46:10.465136] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.610 [2024-07-15 20:46:10.465190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.610 [2024-07-15 20:46:10.465198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.610 [2024-07-15 20:46:10.465205] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.610 [2024-07-15 20:46:10.465211] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.610 [2024-07-15 20:46:10.465334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.610 [2024-07-15 20:46:10.465625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.610 [2024-07-15 20:46:10.465625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.551 20:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:07.551 20:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:07.551 20:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:07.551 20:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:07.551 20:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.551 20:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.551 20:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:07.551 20:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:07.551 [2024-07-15 20:46:11.265994] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.551 20:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:07.811 20:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.811 [2024-07-15 20:46:11.607498] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.811 20:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:08.071 20:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:08.071 Malloc0 00:10:08.331 20:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:08.331 Delay0 00:10:08.331 20:46:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.591 20:46:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:08.591 NULL1 00:10:08.591 20:46:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:08.852 20:46:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1449791 00:10:08.852 20:46:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:08.852 20:46:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:08.852 20:46:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.852 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.112 20:46:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.112 20:46:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:09.112 20:46:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:09.372 true 00:10:09.372 20:46:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:09.372 20:46:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.631 20:46:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.631 20:46:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:09.631 20:46:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:09.891 true 00:10:09.891 20:46:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:09.891 20:46:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.151 20:46:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.151 20:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:10.151 20:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:10.411 true 00:10:10.411 20:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:10.411 20:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.672 20:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.672 20:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:10.672 20:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:10.932 true 00:10:10.932 20:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:10.932 20:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.192 20:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.192 20:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:11.192 20:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:11.452 true 00:10:11.452 20:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:11.452 20:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.712 20:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.712 20:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:11.712 20:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:11.973 true 00:10:11.973 20:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:11.973 20:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.973 20:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.232 20:46:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
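From here to the end of the section the trace is one repeating pattern from ns_hotplug_stress.sh: while spdk_nvme_perf hammers the subsystem with reads, namespace 1 (Delay0) is detached and re-attached and the NULL1 bdev is grown by one block per pass (null_size 1001, 1002, 1003, ...). A minimal reconstruction of that loop, assuming the kill -0 $PERF_PID check at @44 is its continuation condition and shortening the script paths:

  # stress initiator: 30 s of 512-byte random reads at queue depth 128
  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  while kill -0 $PERF_PID; do                                          # keep churning while perf is alive
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove Delay0
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add it back
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 $null_size                         # resize NULL1 under load
  done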
00:10:12.232 20:46:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:12.492 true 00:10:12.492 20:46:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:12.492 20:46:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.492 20:46:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.750 20:46:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:12.750 20:46:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:13.009 true 00:10:13.009 20:46:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:13.009 20:46:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.009 20:46:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.269 20:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:13.269 20:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:13.269 true 00:10:13.530 20:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:13.530 20:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.530 20:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.790 20:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:13.790 20:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:13.790 true 00:10:13.790 20:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:13.790 20:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.050 20:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.326 20:46:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:14.326 20:46:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1011 00:10:14.326 true 00:10:14.326 20:46:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:14.326 20:46:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.586 20:46:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.847 20:46:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:14.847 20:46:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:14.847 true 00:10:14.847 20:46:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:14.847 20:46:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.108 20:46:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.368 20:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:15.368 20:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:15.368 true 00:10:15.368 20:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:15.368 20:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.629 20:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.890 20:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:15.890 20:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:15.890 true 00:10:15.890 20:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:15.890 20:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.151 20:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.412 20:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:16.412 20:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:16.412 true 00:10:16.412 20:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 
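As a quick sanity check while the loop runs, the current namespace layout of cnode1 can be dumped with the standard get RPC (a hypothetical aside, not something the traced script does):

  ./scripts/rpc.py nvmf_get_subsystems    # lists cnode1 with whichever of Delay0/NULL1 is attached at that instant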
00:10:16.412 20:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.672 20:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.672 20:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:16.672 20:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:16.933 true 00:10:16.933 20:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:16.933 20:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.196 20:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.196 20:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:17.196 20:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:17.529 true 00:10:17.529 20:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:17.529 20:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.529 20:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.789 20:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:17.789 20:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:18.049 true 00:10:18.049 20:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:18.049 20:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.049 20:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.309 20:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:18.309 20:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:18.309 true 00:10:18.570 20:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:18.570 20:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.570 20:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.831 20:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:18.831 20:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:18.831 true 00:10:18.831 20:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:18.831 20:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.092 20:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.353 20:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:19.353 20:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:19.353 true 00:10:19.353 20:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:19.353 20:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.613 20:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.893 20:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:19.893 20:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:19.893 true 00:10:19.893 20:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:19.893 20:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.153 20:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.153 20:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:20.153 20:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:20.413 true 00:10:20.413 20:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:20.413 20:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.674 20:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.674 20:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:20.674 20:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:20.934 true 00:10:20.934 20:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:20.934 20:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.195 20:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.195 20:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:21.195 20:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:21.455 true 00:10:21.455 20:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:21.455 20:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.715 20:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.715 20:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:21.716 20:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:21.976 true 00:10:21.976 20:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:21.976 20:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.236 20:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.236 20:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:22.236 20:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:22.495 true 00:10:22.495 20:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:22.495 20:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.495 20:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.755 20:46:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:22.755 20:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:23.015 true 00:10:23.015 20:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:23.015 20:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.015 20:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.275 20:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:23.275 20:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:23.536 true 00:10:23.536 20:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:23.536 20:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.536 20:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.796 20:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:23.796 20:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:24.056 true 00:10:24.056 20:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:24.057 20:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.057 20:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.318 20:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:24.318 20:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:24.578 true 00:10:24.578 20:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:24.578 20:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.578 20:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.839 20:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:24.839 20:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:24.839 true 00:10:25.100 20:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:25.100 20:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.100 20:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.360 20:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:25.360 20:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:25.360 true 00:10:25.620 20:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:25.620 20:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.620 20:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.880 20:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:25.880 20:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:25.880 true 00:10:25.880 20:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:25.880 20:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.140 20:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.400 20:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:26.400 20:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:26.400 true 00:10:26.400 20:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:26.400 20:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.660 20:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.920 20:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:26.920 20:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:26.920 true 00:10:26.920 
20:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:26.920 20:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.179 20:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.439 20:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:27.439 20:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:27.439 true 00:10:27.439 20:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:27.439 20:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.720 20:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.980 20:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:27.980 20:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:27.980 true 00:10:27.980 20:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:27.980 20:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.239 20:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.499 20:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:28.499 20:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:28.499 true 00:10:28.499 20:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:28.499 20:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.759 20:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.019 20:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:29.019 20:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:29.019 true 00:10:29.019 20:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:29.019 20:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.297 20:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.297 20:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:29.297 20:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:29.555 true 00:10:29.555 20:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:29.555 20:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.815 20:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.815 20:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:29.815 20:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:30.075 true 00:10:30.075 20:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:30.075 20:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.334 20:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.334 20:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:30.334 20:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:30.593 true 00:10:30.593 20:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:30.593 20:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.853 20:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.853 20:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:30.853 20:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:31.112 true 00:10:31.112 20:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:31.112 20:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:10:31.372 20:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.372 20:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:31.372 20:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:31.632 true 00:10:31.632 20:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:31.632 20:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.892 20:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.892 20:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:31.892 20:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:32.153 true 00:10:32.153 20:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:32.153 20:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.413 20:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.413 20:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:32.413 20:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:32.674 true 00:10:32.674 20:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:32.674 20:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.934 20:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.934 20:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:32.934 20:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:33.195 true 00:10:33.195 20:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:33.195 20:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.195 20:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.459 20:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:33.459 20:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:33.758 true 00:10:33.758 20:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:33.758 20:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.758 20:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.019 20:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:34.019 20:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:34.019 true 00:10:34.280 20:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:34.280 20:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.280 20:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.541 20:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:34.541 20:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:34.541 true 00:10:34.802 20:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:34.802 20:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.802 20:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.062 20:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:35.062 20:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:35.062 true 00:10:35.323 20:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:35.323 20:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.323 20:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.583 20:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 
00:10:35.583 20:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:35.583 true 00:10:35.844 20:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:35.844 20:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.844 20:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.105 20:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:10:36.105 20:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:10:36.105 true 00:10:36.105 20:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:36.105 20:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.367 20:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.628 20:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:10:36.628 20:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:10:36.628 true 00:10:36.628 20:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:36.628 20:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.889 20:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.149 20:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:10:37.149 20:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:10:37.149 true 00:10:37.149 20:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:37.149 20:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.411 20:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.672 20:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:10:37.672 20:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1057 00:10:37.672 true 00:10:37.672 20:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:37.672 20:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.933 20:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.193 20:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:10:38.193 20:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:10:38.193 true 00:10:38.193 20:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:38.193 20:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.459 20:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.459 20:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:10:38.459 20:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:10:38.720 true 00:10:38.720 20:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:38.720 20:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.982 20:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.982 20:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:10:38.982 20:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:10:39.241 Initializing NVMe Controllers 00:10:39.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:39.242 Controller IO queue size 128, less than required. 00:10:39.242 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:39.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:39.242 Initialization complete. Launching workers. 
00:10:39.242 ======================================================== 00:10:39.242 Latency(us) 00:10:39.242 Device Information : IOPS MiB/s Average min max 00:10:39.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30980.65 15.13 4131.53 2279.57 10001.32 00:10:39.242 ======================================================== 00:10:39.242 Total : 30980.65 15.13 4131.53 2279.57 10001.32 00:10:39.242 00:10:39.242 true 00:10:39.242 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1449791 00:10:39.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1449791) - No such process 00:10:39.242 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1449791 00:10:39.242 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.502 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:39.502 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:39.502 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:39.502 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:39.502 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:39.502 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:39.761 null0 00:10:39.761 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:39.761 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:39.761 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:40.022 null1 00:10:40.022 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:40.022 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:40.022 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:40.022 null2 00:10:40.022 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:40.022 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:40.022 20:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:40.282 null3 00:10:40.282 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:40.282 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:40.282 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:40.282 
null4 00:10:40.543 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:40.543 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:40.543 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:40.543 null5 00:10:40.543 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:40.543 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:40.543 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:40.802 null6 00:10:40.802 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:40.802 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:40.802 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:40.802 null7 00:10:40.802 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:40.802 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
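[Annotation] After the first stress phase, the trace switches to the multi-threaded phase: eight small null bdevs (null0 through null7) are created and one hotplug worker is launched per bdev (markers @58-@66). A rough sketch of that setup, inferred from the logged commands; loop variable names are illustrative and the size/block-size arguments (100, 4096) are exactly those shown in the log:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096        # create null0 .. null7 (@60)
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &                 # one background hotplug worker per bdev (@63)
        pids+=("$!")                                     # remember its PID (@64)
    done
    wait "${pids[@]}"                                    # the "wait 1456297 1456298 ..." entry below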
00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1456297 1456298 1456300 1456302 1456304 1456306 1456308 1456309 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.063 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.064 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:41.064 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:41.064 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.064 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:41.064 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:41.064 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.064 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:41.064 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:41.064 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:41.325 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.325 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.325 20:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:41.325 20:46:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.325 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.586 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:41.586 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:41.586 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:41.586 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:41.586 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:41.586 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.586 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.587 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:41.848 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:41.848 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.848 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:41.848 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.848 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:41.848 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:41.848 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:41.848 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.848 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.848 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.848 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:41.848 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.848 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.848 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:42.109 
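[Annotation] The interleaved @14-@18 entries from here on are the eight add_remove workers running concurrently, each repeatedly attaching and detaching its own namespace. A minimal sketch of one worker, reconstructed from the logged calls (illustrative only; the iteration count of 10 matches the "(( i < 10 ))" checks in the trace):

    add_remove() {
        local nsid=$1 bdev=$2                                                            # e.g. nsid=1 bdev=null0 (@14)
        for ((i = 0; i < 10; i++)); do                                                   # (@16)
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # attach $bdev as namespace $nsid (@17)
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # detach it again (@18)
        done
    }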
20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:42.109 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:42.110 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:42.110 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:42.110 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.110 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.110 20:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.371 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.632 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.632 20:46:46 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.893 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:43.154 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:43.154 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:43.154 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.154 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.154 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:43.154 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.154 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.154 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:43.154 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:43.154 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.154 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:43.154 20:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:43.154 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.154 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.154 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:43.154 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:43.415 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.415 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.415 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:43.415 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.415 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.415 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:43.415 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:43.415 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:43.416 20:46:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:43.416 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:43.676 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.676 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.676 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:43.676 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:43.676 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.676 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.676 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:43.676 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.676 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.677 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:43.677 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:43.677 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.677 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.677 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:43.677 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.677 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.677 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:43.677 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:43.677 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.677 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.677 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:43.677 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.677 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.677 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.938 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:44.198 20:46:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.198 20:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.198 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.198 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.198 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.198 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.198 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.198 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:44.198 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.198 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:44.457 20:46:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:44.457 rmmod nvme_tcp 00:10:44.457 rmmod nvme_fabrics 00:10:44.457 rmmod nvme_keyring 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1449383 ']' 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1449383 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1449383 ']' 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1449383 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:44.457 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1449383 00:10:44.730 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:44.730 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:44.730 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1449383' 00:10:44.730 killing process with pid 1449383 00:10:44.730 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1449383 00:10:44.730 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1449383 00:10:44.730 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:44.730 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:44.730 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:44.730 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:44.730 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:44.730 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.730 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.730 20:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.661 20:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:46.661 00:10:46.661 real 0m47.189s 00:10:46.661 user 3m13.432s 00:10:46.661 sys 0m16.327s 00:10:46.661 20:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.661 20:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.922 ************************************ 00:10:46.922 END TEST nvmf_ns_hotplug_stress 00:10:46.922 ************************************ 00:10:46.922 20:46:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:46.922 20:46:50 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:46.922 20:46:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:46.922 20:46:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.922 20:46:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:46.922 ************************************ 00:10:46.922 START TEST nvmf_connect_stress 00:10:46.922 ************************************ 00:10:46.922 20:46:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:46.922 * Looking for test storage... 00:10:46.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.922 20:46:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.922 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:46.922 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.922 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.922 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.922 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:46.923 20:46:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:55.066 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:55.066 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:55.066 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:55.066 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:55.066 20:46:57 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:55.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:10:55.066 00:10:55.066 --- 10.0.0.2 ping statistics --- 00:10:55.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.066 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.432 ms 00:10:55.066 00:10:55.066 --- 10.0.0.1 ping statistics --- 00:10:55.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.066 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:55.066 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.067 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:55.067 20:46:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1461448 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1461448 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1461448 ']' 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
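The nvmf_tcp_init trace above moves one E810 port (cvl_0_0) into the cvl_0_0_ns_spdk network namespace so the SPDK target and the host-side initiator can reach each other over 10.0.0.0/24 on a single machine. A condensed, illustrative replay of those steps is sketched below; the interface names and addresses are taken from this trace, will differ on other rigs, and the commands must run as root.

    # illustrative sketch of the nvmf_tcp_init steps traced above (nvmf/common.sh), not a verbatim excerpt
    TARGET_IF=cvl_0_0         # port handed to the SPDK target inside the namespace
    INITIATOR_IF=cvl_0_1      # port left in the default namespace for the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the default listener port
    ping -c 1 10.0.0.2                         # target address reachable from the default namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1     # initiator address reachable from inside the namespace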
00:10:55.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.067 [2024-07-15 20:46:58.075398] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:10:55.067 [2024-07-15 20:46:58.075464] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.067 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.067 [2024-07-15 20:46:58.162709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:55.067 [2024-07-15 20:46:58.256387] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.067 [2024-07-15 20:46:58.256449] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.067 [2024-07-15 20:46:58.256456] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.067 [2024-07-15 20:46:58.256463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.067 [2024-07-15 20:46:58.256469] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.067 [2024-07-15 20:46:58.256595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.067 [2024-07-15 20:46:58.256767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.067 [2024-07-15 20:46:58.256767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.067 [2024-07-15 20:46:58.913330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.067 
20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.067 [2024-07-15 20:46:58.937752] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.067 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.328 NULL1 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1461691 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:59 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.328 20:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.651 20:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.651 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:10:55.651 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.651 20:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.651 20:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.911 20:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.911 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:10:55.911 20:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:10:55.911 20:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.911 20:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.172 20:47:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.172 20:47:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:10:56.172 20:47:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.172 20:47:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.172 20:47:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.741 20:47:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.742 20:47:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:10:56.742 20:47:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.742 20:47:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.742 20:47:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.002 20:47:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.002 20:47:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:10:57.002 20:47:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.002 20:47:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.002 20:47:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.262 20:47:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.262 20:47:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:10:57.262 20:47:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.262 20:47:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.262 20:47:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.522 20:47:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.522 20:47:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:10:57.522 20:47:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.522 20:47:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.522 20:47:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.782 20:47:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.782 20:47:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:10:57.783 20:47:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.783 20:47:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.783 20:47:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.351 20:47:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.351 20:47:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:10:58.351 20:47:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.351 20:47:01 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.351 20:47:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.611 20:47:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.611 20:47:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:10:58.611 20:47:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.611 20:47:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.611 20:47:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.870 20:47:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.870 20:47:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:10:58.870 20:47:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.870 20:47:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.870 20:47:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.130 20:47:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.130 20:47:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:10:59.130 20:47:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.130 20:47:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.130 20:47:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.699 20:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.699 20:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:10:59.699 20:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.699 20:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.699 20:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.959 20:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.959 20:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:10:59.959 20:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.959 20:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.959 20:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.219 20:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.219 20:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:00.219 20:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.219 20:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.219 20:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.479 20:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.479 20:47:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:00.479 20:47:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.479 20:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:11:00.479 20:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.739 20:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.740 20:47:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:00.740 20:47:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.740 20:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.740 20:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.312 20:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.312 20:47:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:01.312 20:47:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.312 20:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.312 20:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.571 20:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.571 20:47:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:01.571 20:47:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.571 20:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.571 20:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.832 20:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.832 20:47:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:01.832 20:47:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.832 20:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.832 20:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.092 20:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.092 20:47:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:02.092 20:47:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.092 20:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.092 20:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.352 20:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.352 20:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:02.352 20:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.352 20:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.352 20:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.923 20:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.923 20:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:02.923 20:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.923 20:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.923 20:47:06 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:11:03.182 20:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.182 20:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:03.182 20:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.182 20:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.182 20:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.443 20:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.443 20:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:03.443 20:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.443 20:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.443 20:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.703 20:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.703 20:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:03.703 20:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.703 20:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.703 20:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.963 20:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.963 20:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:03.963 20:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.963 20:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.963 20:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.559 20:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.559 20:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:04.559 20:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.559 20:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.559 20:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.819 20:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.819 20:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:04.819 20:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.819 20:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.819 20:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.079 20:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.079 20:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:05.079 20:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.079 20:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.079 20:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.338 Testing 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:05.338 20:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.338 20:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1461691 00:11:05.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1461691) - No such process 00:11:05.338 20:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1461691 00:11:05.338 20:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:05.338 20:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:05.338 20:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:05.338 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:05.338 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:05.338 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:05.338 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:05.338 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:05.338 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:05.338 rmmod nvme_tcp 00:11:05.338 rmmod nvme_fabrics 00:11:05.338 rmmod nvme_keyring 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1461448 ']' 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1461448 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1461448 ']' 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1461448 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1461448 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1461448' 00:11:05.598 killing process with pid 1461448 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1461448 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1461448 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.598 20:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.136 20:47:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:08.136 00:11:08.136 real 0m20.859s 00:11:08.136 user 0m42.089s 00:11:08.136 sys 0m8.658s 00:11:08.136 20:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:08.136 20:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.136 ************************************ 00:11:08.136 END TEST nvmf_connect_stress 00:11:08.136 ************************************ 00:11:08.136 20:47:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:08.136 20:47:11 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:08.136 20:47:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:08.136 20:47:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:08.136 20:47:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:08.136 ************************************ 00:11:08.136 START TEST nvmf_fused_ordering 00:11:08.136 ************************************ 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:08.136 * Looking for test storage... 
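The connect_stress pass that finishes above follows a simple pattern: the helper binary opens and closes connections against the cnode1 listener for 10 seconds (-t 10) while the shell keeps the target busy with RPCs for as long as the stress process is alive, after which the trap tears everything down (nvme-tcp/nvme-fabrics modules removed, the nvmf_tgt pid killed, the test addresses flushed). A rough sketch of that monitoring loop, with names taken from the output above and the prepared RPC payload in rpc.txt left abstract because it is not shown in the log:

    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2> /dev/null; do
        rpc_cmd < "$rpcs"    # assumption: replays the request list built into rpc.txt earlier
    done
    wait "$PERF_PID"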
00:11:08.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:08.136 20:47:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:14.743 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:14.743 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:14.743 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.743 20:47:18 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:14.743 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:14.743 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:14.744 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:15.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:15.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:11:15.005 00:11:15.005 --- 10.0.0.2 ping statistics --- 00:11:15.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.005 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:15.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.510 ms 00:11:15.005 00:11:15.005 --- 10.0.0.1 ping statistics --- 00:11:15.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.005 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1467834 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1467834 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1467834 ']' 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.005 20:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.005 [2024-07-15 20:47:18.893711] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
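The pings above verify the point-to-point topology the harness built from the two E810 ports it found: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened in iptables, and both directions are checked. Condensed from the commands shown earlier in the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1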
00:11:15.005 [2024-07-15 20:47:18.893779] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.265 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.265 [2024-07-15 20:47:18.980819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.265 [2024-07-15 20:47:19.072258] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.265 [2024-07-15 20:47:19.072313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.265 [2024-07-15 20:47:19.072327] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.265 [2024-07-15 20:47:19.072334] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.265 [2024-07-15 20:47:19.072340] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.265 [2024-07-15 20:47:19.072366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.836 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:15.836 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:15.836 20:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:15.836 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:15.837 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.837 20:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.837 20:47:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.837 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.837 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.837 [2024-07-15 20:47:19.716800] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.837 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.837 20:47:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:15.837 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.837 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:16.097 [2024-07-15 20:47:19.741016] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.097 20:47:19 
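At this point the fused_ordering target is up on core 1 and subsystem nqn.2016-06.io.spdk:cnode1 is listening on 10.0.0.2:4420. For orientation only (the test itself drives the fabric with its own fused_ordering binary, shown next), the same listener could be reached from the initiator side with stock nvme-cli:

    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1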
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:16.097 NULL1 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.097 20:47:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:16.097 [2024-07-15 20:47:19.811018] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:11:16.097 [2024-07-15 20:47:19.811061] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468037 ] 00:11:16.097 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.667 Attached to nqn.2016-06.io.spdk:cnode1 00:11:16.667 Namespace ID: 1 size: 1GB 00:11:16.667 fused_ordering(0) 00:11:16.667 fused_ordering(1) 00:11:16.667 fused_ordering(2) 00:11:16.667 fused_ordering(3) 00:11:16.667 fused_ordering(4) 00:11:16.667 fused_ordering(5) 00:11:16.667 fused_ordering(6) 00:11:16.667 fused_ordering(7) 00:11:16.667 fused_ordering(8) 00:11:16.667 fused_ordering(9) 00:11:16.667 fused_ordering(10) 00:11:16.667 fused_ordering(11) 00:11:16.667 fused_ordering(12) 00:11:16.667 fused_ordering(13) 00:11:16.667 fused_ordering(14) 00:11:16.667 fused_ordering(15) 00:11:16.667 fused_ordering(16) 00:11:16.667 fused_ordering(17) 00:11:16.667 fused_ordering(18) 00:11:16.667 fused_ordering(19) 00:11:16.667 fused_ordering(20) 00:11:16.667 fused_ordering(21) 00:11:16.667 fused_ordering(22) 00:11:16.667 fused_ordering(23) 00:11:16.667 fused_ordering(24) 00:11:16.667 fused_ordering(25) 00:11:16.667 fused_ordering(26) 00:11:16.667 fused_ordering(27) 00:11:16.667 fused_ordering(28) 00:11:16.667 fused_ordering(29) 00:11:16.667 fused_ordering(30) 00:11:16.667 fused_ordering(31) 00:11:16.667 fused_ordering(32) 00:11:16.667 fused_ordering(33) 00:11:16.667 fused_ordering(34) 00:11:16.667 fused_ordering(35) 00:11:16.667 fused_ordering(36) 00:11:16.667 fused_ordering(37) 00:11:16.667 fused_ordering(38) 00:11:16.667 fused_ordering(39) 00:11:16.667 fused_ordering(40) 00:11:16.667 fused_ordering(41) 00:11:16.667 fused_ordering(42) 00:11:16.667 fused_ordering(43) 00:11:16.667 
fused_ordering(44) 00:11:16.667 [fused_ordering progress counters 45 through 1011 omitted; the entries are identical apart from the index, with timestamps advancing from 00:11:16.667 to 00:11:19.032] fused_ordering(1012)
00:11:19.032 fused_ordering(1013) 00:11:19.032 fused_ordering(1014) 00:11:19.032 fused_ordering(1015) 00:11:19.032 fused_ordering(1016) 00:11:19.032 fused_ordering(1017) 00:11:19.032 fused_ordering(1018) 00:11:19.032 fused_ordering(1019) 00:11:19.032 fused_ordering(1020) 00:11:19.032 fused_ordering(1021) 00:11:19.032 fused_ordering(1022) 00:11:19.032 fused_ordering(1023) 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:19.032 rmmod nvme_tcp 00:11:19.032 rmmod nvme_fabrics 00:11:19.032 rmmod nvme_keyring 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1467834 ']' 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1467834 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1467834 ']' 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1467834 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1467834 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1467834' 00:11:19.032 killing process with pid 1467834 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1467834 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1467834 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.032 20:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.578 20:47:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:21.578 00:11:21.578 real 0m13.363s 00:11:21.578 user 0m7.192s 00:11:21.578 sys 0m7.301s 00:11:21.578 20:47:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:21.578 20:47:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.578 ************************************ 00:11:21.578 END TEST nvmf_fused_ordering 00:11:21.578 ************************************ 00:11:21.578 20:47:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:21.578 20:47:24 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:21.578 20:47:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:21.578 20:47:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:21.578 20:47:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:21.578 ************************************ 00:11:21.578 START TEST nvmf_delete_subsystem 00:11:21.578 ************************************ 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:21.578 * Looking for test storage... 00:11:21.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.578 20:47:25 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.578 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.579 20:47:25 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:21.579 20:47:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.219 20:47:31 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:28.219 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:28.219 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:28.219 20:47:31 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.219 20:47:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:28.219 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:28.219 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:28.219 20:47:32 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.219 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:28.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:11:28.480 00:11:28.480 --- 10.0.0.2 ping statistics --- 00:11:28.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.480 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:28.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:11:28.480 00:11:28.480 --- 10.0.0.1 ping statistics --- 00:11:28.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.480 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:28.480 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:28.481 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:28.481 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.481 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1472836 00:11:28.481 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1472836 00:11:28.481 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:28.481 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1472836 ']' 00:11:28.481 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.481 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.481 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.481 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.481 20:47:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.741 [2024-07-15 20:47:32.417680] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:11:28.741 [2024-07-15 20:47:32.417740] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.741 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.741 [2024-07-15 20:47:32.486935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:28.741 [2024-07-15 20:47:32.560792] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:28.741 [2024-07-15 20:47:32.560829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.741 [2024-07-15 20:47:32.560836] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.741 [2024-07-15 20:47:32.560843] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.741 [2024-07-15 20:47:32.560848] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.741 [2024-07-15 20:47:32.560990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.741 [2024-07-15 20:47:32.560991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.330 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.330 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:29.330 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:29.330 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:29.330 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.590 [2024-07-15 20:47:33.232410] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.590 [2024-07-15 20:47:33.256555] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.590 NULL1 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.590 Delay0 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1472873 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:29.590 20:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:29.590 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.590 [2024-07-15 20:47:33.353164] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
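At this point the delete_subsystem test has a complete target configuration: a TCP transport (-o -u 8192), subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a namespace backed by the null bdev NULL1 wrapped in the delay bdev Delay0 (the 1000000 latency arguments to bdev_delay_create hold each I/O for roughly a second, so the perf queue stays full). spdk_nvme_perf has just connected with a queue depth of 128, and the step below deletes the subsystem while those commands are still outstanding. A minimal sketch of the same sequence, assuming it is run from an SPDK source tree with the standard scripts/rpc.py helper (rpc_cmd in the log is a test wrapper that forwards to it); flags are copied from the commands traced above:

  # start the target and create the TCP transport used by this test
  ./build/bin/nvmf_tgt -m 0x3 &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # subsystem with a delayed null bdev as its only namespace, listening on 10.0.0.2:4420
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # queue up I/O against the delayed namespace, then delete the subsystem underneath it
  ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  wait $perf_pid   # outstanding commands are failed back rather than left hanging

The burst of "completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines that follows is the expected result: deleting the subsystem tears down its queue pairs, and the commands still queued in Delay0 are failed back to the initiator instead of completing normally.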
00:11:31.499 20:47:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.499 20:47:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.499 20:47:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.761 Write completed with error (sct=0, sc=8) 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 starting I/O failed: -6 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Write completed with error (sct=0, sc=8) 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Write completed with error (sct=0, sc=8) 00:11:31.761 starting I/O failed: -6 00:11:31.761 Write completed with error (sct=0, sc=8) 00:11:31.761 Write completed with error (sct=0, sc=8) 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Write completed with error (sct=0, sc=8) 00:11:31.761 starting I/O failed: -6 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 starting I/O failed: -6 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Write completed with error (sct=0, sc=8) 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 starting I/O failed: -6 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Write completed with error (sct=0, sc=8) 00:11:31.761 Write completed with error (sct=0, sc=8) 00:11:31.761 starting I/O failed: -6 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Write completed with error (sct=0, sc=8) 00:11:31.761 starting I/O failed: -6 00:11:31.761 Write completed with error (sct=0, sc=8) 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.761 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 
00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read 
completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 starting I/O failed: -6 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 starting I/O failed: -6 00:11:31.762 [2024-07-15 20:47:35.481454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f274800d430 is same with the state(5) to be set 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error 
(sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Read completed with error (sct=0, sc=8) 00:11:31.762 Write completed with error (sct=0, sc=8) 00:11:32.703 [2024-07-15 20:47:36.449668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffdac0 is same with the state(5) to be set 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 
Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 [2024-07-15 20:47:36.482332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffc3e0 is same with the state(5) to be set 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 [2024-07-15 20:47:36.482625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffc7a0 is same with the state(5) to be set 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed 
with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 [2024-07-15 20:47:36.483585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f274800cfe0 is same with the state(5) to be set 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 Read completed with error (sct=0, sc=8) 00:11:32.703 Write completed with error (sct=0, sc=8) 00:11:32.703 [2024-07-15 20:47:36.483854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f274800d740 is same with the state(5) to be set 00:11:32.703 Initializing NVMe Controllers 00:11:32.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:32.703 Controller IO queue size 128, less than required. 00:11:32.703 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:32.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:32.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:32.703 Initialization complete. Launching workers. 
00:11:32.703 ======================================================== 00:11:32.703 Latency(us) 00:11:32.703 Device Information : IOPS MiB/s Average min max 00:11:32.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.26 0.09 898991.24 287.32 1007175.92 00:11:32.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.88 0.08 988492.25 286.34 2002666.96 00:11:32.703 ======================================================== 00:11:32.703 Total : 342.14 0.17 939768.26 286.34 2002666.96 00:11:32.703 00:11:32.703 [2024-07-15 20:47:36.484390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffdac0 (9): Bad file descriptor 00:11:32.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:32.703 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.703 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:32.703 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1472873 00:11:32.703 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1472873 00:11:33.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1472873) - No such process 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1472873 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1472873 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1472873 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.299 20:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.299 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.299 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
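The check performed just above, reduced to a sketch (the PID and the loop bound are the ones visible in the trace; the exact control flow inside delete_subsystem.sh may differ slightly from this reconstruction):

  # after nvmf_delete_subsystem, perf must notice the dead connection and exit on its own
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 30 )) && exit 1     # fail the test if perf lingers past ~15 s of 0.5 s polls
      sleep 0.5
  done
  NOT wait "$perf_pid"                 # PID already reaped, so a bare wait is expected to fail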
00:11:33.299 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.299 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.299 [2024-07-15 20:47:37.013975] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.299 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.299 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.299 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.299 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.299 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.299 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1473674 00:11:33.299 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:33.299 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:33.299 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1473674 00:11:33.299 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:33.299 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.299 [2024-07-15 20:47:37.084658] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
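The second pass recreates the same target objects and launches a shorter run, then polls for its exit; again this is just the surrounding trace collapsed into a sketch (rpc.py abbreviates the full scripts/rpc.py path, and the arguments are the ones echoed in the log):

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # shorter 3 s run, polled with the same kill -0 / sleep 0.5 loop (bound of 20 this time)
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &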
00:11:33.900 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:33.900 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1473674 00:11:33.900 20:47:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:34.159 20:47:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:34.159 20:47:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1473674 00:11:34.159 20:47:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:34.727 20:47:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:34.727 20:47:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1473674 00:11:34.727 20:47:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:35.298 20:47:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:35.298 20:47:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1473674 00:11:35.298 20:47:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:35.867 20:47:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:35.867 20:47:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1473674 00:11:35.867 20:47:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:36.436 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:36.436 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1473674 00:11:36.436 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:36.436 Initializing NVMe Controllers 00:11:36.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:36.436 Controller IO queue size 128, less than required. 00:11:36.436 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:36.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:36.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:36.436 Initialization complete. Launching workers. 
00:11:36.436 ======================================================== 00:11:36.436 Latency(us) 00:11:36.436 Device Information : IOPS MiB/s Average min max 00:11:36.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002652.62 1000330.27 1042376.16 00:11:36.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003078.63 1000231.17 1009615.38 00:11:36.436 ======================================================== 00:11:36.436 Total : 256.00 0.12 1002865.62 1000231.17 1042376.16 00:11:36.436 00:11:36.695 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:36.695 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1473674 00:11:36.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1473674) - No such process 00:11:36.695 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1473674 00:11:36.695 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:36.695 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:36.695 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:36.695 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:36.695 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:36.695 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:36.695 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:36.695 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:36.695 rmmod nvme_tcp 00:11:36.955 rmmod nvme_fabrics 00:11:36.955 rmmod nvme_keyring 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1472836 ']' 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1472836 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1472836 ']' 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1472836 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1472836 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1472836' 00:11:36.955 killing process with pid 1472836 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1472836 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
1472836 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.955 20:47:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.501 20:47:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:39.501 00:11:39.501 real 0m17.896s 00:11:39.501 user 0m30.633s 00:11:39.501 sys 0m6.278s 00:11:39.501 20:47:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.501 20:47:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.501 ************************************ 00:11:39.501 END TEST nvmf_delete_subsystem 00:11:39.501 ************************************ 00:11:39.501 20:47:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:39.501 20:47:42 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:39.501 20:47:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:39.501 20:47:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.501 20:47:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:39.501 ************************************ 00:11:39.501 START TEST nvmf_ns_masking 00:11:39.501 ************************************ 00:11:39.501 20:47:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:39.501 * Looking for test storage... 
00:11:39.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:39.502 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=efebe98b-03ec-410c-9980-ae558de60a8f 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=0718a1b6-fe4c-4757-a7fa-99ef018bcdbe 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=070c368a-97e0-4981-860a-2adbc46a9276 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:39.503 20:47:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:46.093 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:46.093 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:46.093 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:46.094 
20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:46.094 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:46.094 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:46.094 20:47:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:46.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:11:46.355 00:11:46.355 --- 10.0.0.2 ping statistics --- 00:11:46.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.355 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:46.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:46.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:11:46.355 00:11:46.355 --- 10.0.0.1 ping statistics --- 00:11:46.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.355 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:46.355 20:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:46.616 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1478545 00:11:46.616 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1478545 00:11:46.616 20:47:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:46.616 20:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1478545 ']' 00:11:46.616 20:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.616 20:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:46.616 20:47:50 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.616 20:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:46.616 20:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:46.616 [2024-07-15 20:47:50.300663] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:11:46.616 [2024-07-15 20:47:50.300720] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.616 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.616 [2024-07-15 20:47:50.367689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.616 [2024-07-15 20:47:50.440012] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.616 [2024-07-15 20:47:50.440048] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.616 [2024-07-15 20:47:50.440055] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.616 [2024-07-15 20:47:50.440061] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.616 [2024-07-15 20:47:50.440067] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.616 [2024-07-15 20:47:50.440085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.557 20:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:47.557 20:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:47.557 20:47:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:47.557 20:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:47.557 20:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:47.557 20:47:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.557 20:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:47.557 [2024-07-15 20:47:51.263258] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.557 20:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:47.557 20:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:47.557 20:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:47.818 Malloc1 00:11:47.818 20:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:47.818 Malloc2 00:11:47.818 20:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
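Condensing the target-side commands traced above for the ns_masking test (this merely restates the log; paths are shortened and the sizes are the ones shown in the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1     # 64 MB bdev, 512 B blocks
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME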
00:11:48.078 20:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:48.339 20:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.339 [2024-07-15 20:47:52.121158] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.339 20:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:48.339 20:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 070c368a-97e0-4981-860a-2adbc46a9276 -a 10.0.0.2 -s 4420 -i 4 00:11:48.600 20:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.600 20:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:48.600 20:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.600 20:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:48.600 20:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:50.510 [ 0]:0x1 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee08e86d19624511afa2e681d147184c 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee08e86d19624511afa2e681d147184c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.510 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
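(The ns_is_visible() helper exercised throughout the trace reduces to roughly the host-side check below; the device name nvme0 and nsid 0x1 are taken from the log. A namespace counts as visible when nvme list-ns reports it and its NGUID is non-zero.)
nvme list-ns /dev/nvme0 | grep 0x1
nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
[[ "$nguid" != "00000000000000000000000000000000" ]] && echo "nsid 0x1 is visible"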
00:11:50.770 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:50.770 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:50.770 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:50.770 [ 0]:0x1 00:11:50.770 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:50.770 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:50.770 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee08e86d19624511afa2e681d147184c 00:11:50.770 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee08e86d19624511afa2e681d147184c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.770 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:50.770 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:50.770 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:50.770 [ 1]:0x2 00:11:50.770 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:50.770 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.030 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ac21b9fa5a124e29971f32fcdbace85f 00:11:51.030 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ac21b9fa5a124e29971f32fcdbace85f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.030 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:51.030 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.030 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.290 20:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:51.290 20:47:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:51.290 20:47:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 070c368a-97e0-4981-860a-2adbc46a9276 -a 10.0.0.2 -s 4420 -i 4 00:11:51.550 20:47:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:51.550 20:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:51.550 20:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.550 20:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:51.550 20:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:51.550 20:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:53.460 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:53.460 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:53.460 20:47:57 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.460 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:53.460 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.460 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:53.460 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:53.460 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:53.721 [ 0]:0x2 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ac21b9fa5a124e29971f32fcdbace85f 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
ac21b9fa5a124e29971f32fcdbace85f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:53.721 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:54.015 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:54.015 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:54.015 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.015 [ 0]:0x1 00:11:54.015 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.015 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.015 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee08e86d19624511afa2e681d147184c 00:11:54.015 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee08e86d19624511afa2e681d147184c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.015 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:54.015 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.015 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:54.015 [ 1]:0x2 00:11:54.015 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.015 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.015 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ac21b9fa5a124e29971f32fcdbace85f 00:11:54.015 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ac21b9fa5a124e29971f32fcdbace85f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.015 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:54.276 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:54.276 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:54.276 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:54.276 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:54.276 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.276 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:54.276 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.276 20:47:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:54.276 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:54.276 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.276 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.276 20:47:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:54.276 [ 0]:0x2 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ac21b9fa5a124e29971f32fcdbace85f 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ac21b9fa5a124e29971f32fcdbace85f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.276 20:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:54.536 20:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:54.536 20:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 070c368a-97e0-4981-860a-2adbc46a9276 -a 10.0.0.2 -s 4420 -i 4 00:11:54.796 20:47:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:54.796 20:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:54.796 20:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.796 20:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:54.796 20:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:54.796 20:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:56.705 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:56.705 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:56.705 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.705 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:56.705 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.705 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
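(Condensing the masking steps exercised above, with rpc.py again abbreviating the full workspace path: a namespace added with --no-auto-visible stays hidden from every host until it is explicitly mapped, and unmapping hides it again, which is what the visible/not-visible checks above verify.)
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1     # nsid 1 becomes visible to host1
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1     # hidden from host1 again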
00:11:56.705 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:56.705 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:56.705 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:56.705 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:56.705 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:56.705 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.705 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:56.705 [ 0]:0x1 00:11:56.705 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:56.705 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ee08e86d19624511afa2e681d147184c 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ee08e86d19624511afa2e681d147184c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:56.966 [ 1]:0x2 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ac21b9fa5a124e29971f32fcdbace85f 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ac21b9fa5a124e29971f32fcdbace85f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:56.966 20:48:00 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:57.227 [ 0]:0x2 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ac21b9fa5a124e29971f32fcdbace85f 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ac21b9fa5a124e29971f32fcdbace85f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:57.227 20:48:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:57.227 [2024-07-15 20:48:01.091254] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:57.227 request: 00:11:57.227 { 00:11:57.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:57.227 "nsid": 2, 00:11:57.227 "host": "nqn.2016-06.io.spdk:host1", 00:11:57.227 "method": "nvmf_ns_remove_host", 00:11:57.227 "req_id": 1 00:11:57.227 } 00:11:57.227 Got JSON-RPC error response 00:11:57.227 response: 00:11:57.227 { 00:11:57.227 "code": -32602, 00:11:57.227 "message": "Invalid parameters" 00:11:57.227 } 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:57.488 [ 0]:0x2 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ac21b9fa5a124e29971f32fcdbace85f 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
ac21b9fa5a124e29971f32fcdbace85f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1481096 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1481096 /var/tmp/host.sock 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1481096 ']' 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:57.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.488 20:48:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:57.488 [2024-07-15 20:48:01.356440] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:11:57.488 [2024-07-15 20:48:01.356495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481096 ] 00:11:57.748 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.748 [2024-07-15 20:48:01.433403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.748 [2024-07-15 20:48:01.499551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.317 20:48:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.317 20:48:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:58.317 20:48:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.578 20:48:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:58.578 20:48:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid efebe98b-03ec-410c-9980-ae558de60a8f 00:11:58.578 20:48:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:58.578 20:48:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g EFEBE98B03EC410C9980AE558DE60A8F -i 00:11:58.840 20:48:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 0718a1b6-fe4c-4757-a7fa-99ef018bcdbe 00:11:58.840 20:48:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:58.840 20:48:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 0718A1B6FE4C4757A7FA99EF018BCDBE -i 00:11:58.840 20:48:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:59.100 20:48:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:59.360 20:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:59.361 20:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:59.620 nvme0n1 00:11:59.620 20:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:59.620 20:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:00.191 nvme1n2 00:12:00.191 20:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:00.191 20:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:00.191 20:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:00.191 20:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:00.191 20:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:00.191 20:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:00.191 20:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:00.191 20:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:00.191 20:48:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:00.451 20:48:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ efebe98b-03ec-410c-9980-ae558de60a8f == \e\f\e\b\e\9\8\b\-\0\3\e\c\-\4\1\0\c\-\9\9\8\0\-\a\e\5\5\8\d\e\6\0\a\8\f ]] 00:12:00.451 20:48:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:00.451 20:48:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:00.451 20:48:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:00.451 20:48:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 0718a1b6-fe4c-4757-a7fa-99ef018bcdbe == \0\7\1\8\a\1\b\6\-\f\e\4\c\-\4\7\5\7\-\a\7\f\a\-\9\9\e\f\0\1\8\b\c\d\b\e ]] 00:12:00.451 20:48:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1481096 00:12:00.451 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1481096 ']' 00:12:00.451 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1481096 00:12:00.451 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:00.451 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:00.451 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1481096 00:12:00.711 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:00.711 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:00.711 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1481096' 00:12:00.711 killing process with pid 1481096 00:12:00.711 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1481096 00:12:00.711 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1481096 00:12:00.711 20:48:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:00.972 20:48:04 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:00.972 rmmod nvme_tcp 00:12:00.972 rmmod nvme_fabrics 00:12:00.972 rmmod nvme_keyring 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1478545 ']' 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1478545 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1478545 ']' 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1478545 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:00.972 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1478545 00:12:01.233 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:01.233 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:01.233 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1478545' 00:12:01.233 killing process with pid 1478545 00:12:01.233 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1478545 00:12:01.233 20:48:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1478545 00:12:01.233 20:48:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:01.233 20:48:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:01.233 20:48:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:01.233 20:48:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.233 20:48:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:01.233 20:48:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.233 20:48:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.233 20:48:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.780 20:48:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:03.780 00:12:03.780 real 0m24.123s 00:12:03.780 user 0m24.274s 00:12:03.780 sys 0m7.206s 00:12:03.780 20:48:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:03.780 20:48:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:03.780 ************************************ 00:12:03.780 END TEST nvmf_ns_masking 00:12:03.780 ************************************ 00:12:03.780 20:48:07 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:03.780 20:48:07 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:03.780 20:48:07 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:03.780 20:48:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:03.780 20:48:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.780 20:48:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:03.780 ************************************ 00:12:03.780 START TEST nvmf_nvme_cli 00:12:03.780 ************************************ 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:03.780 * Looking for test storage... 00:12:03.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:03.780 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:03.781 20:48:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:03.781 20:48:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:10.366 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:10.366 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:10.366 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:10.366 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.366 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:10.626 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:10.626 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:10.626 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:10.626 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:10.626 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:10.626 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:10.626 20:48:14 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:10.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:10.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:12:10.885 00:12:10.885 --- 10.0.0.2 ping statistics --- 00:12:10.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.885 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:12:10.885 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:10.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.390 ms 00:12:10.885 00:12:10.885 --- 10.0.0.1 ping statistics --- 00:12:10.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.885 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:12:10.885 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.885 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:10.885 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:10.885 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.885 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:10.885 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:10.885 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.885 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:10.885 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:10.885 20:48:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:10.885 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:10.885 20:48:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:10.885 20:48:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.886 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1486319 00:12:10.886 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1486319 00:12:10.886 20:48:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:10.886 20:48:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1486319 ']' 00:12:10.886 20:48:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.886 20:48:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:10.886 20:48:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.886 20:48:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:10.886 20:48:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.886 [2024-07-15 20:48:14.630068] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:12:10.886 [2024-07-15 20:48:14.630144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.886 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.886 [2024-07-15 20:48:14.701303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.145 [2024-07-15 20:48:14.780172] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.145 [2024-07-15 20:48:14.780209] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.145 [2024-07-15 20:48:14.780217] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.145 [2024-07-15 20:48:14.780224] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.145 [2024-07-15 20:48:14.780230] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.145 [2024-07-15 20:48:14.780398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.145 [2024-07-15 20:48:14.780525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.145 [2024-07-15 20:48:14.780685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.145 [2024-07-15 20:48:14.780686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.714 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.714 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:11.714 20:48:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:11.714 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:11.714 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.715 [2024-07-15 20:48:15.457742] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.715 Malloc0 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.715 Malloc1 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.715 20:48:15 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.715 [2024-07-15 20:48:15.547579] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.715 20:48:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:11.975 00:12:11.975 Discovery Log Number of Records 2, Generation counter 2 00:12:11.975 =====Discovery Log Entry 0====== 00:12:11.975 trtype: tcp 00:12:11.975 adrfam: ipv4 00:12:11.975 subtype: current discovery subsystem 00:12:11.975 treq: not required 00:12:11.975 portid: 0 00:12:11.975 trsvcid: 4420 00:12:11.975 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:11.975 traddr: 10.0.0.2 00:12:11.975 eflags: explicit discovery connections, duplicate discovery information 00:12:11.975 sectype: none 00:12:11.975 =====Discovery Log Entry 1====== 00:12:11.975 trtype: tcp 00:12:11.975 adrfam: ipv4 00:12:11.975 subtype: nvme subsystem 00:12:11.975 treq: not required 00:12:11.975 portid: 0 00:12:11.975 trsvcid: 4420 00:12:11.975 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:11.975 traddr: 10.0.0.2 00:12:11.975 eflags: none 00:12:11.975 sectype: none 00:12:11.975 20:48:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:11.975 20:48:15 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:11.975 20:48:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:11.975 20:48:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:11.975 20:48:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:11.975 20:48:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:11.975 20:48:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:11.975 20:48:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:11.975 20:48:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:11.975 20:48:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:11.975 20:48:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.885 20:48:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:13.885 20:48:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:13.885 20:48:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.885 20:48:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:13.885 20:48:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:13.885 20:48:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:15.858 20:48:19 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:15.858 /dev/nvme0n1 ]] 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:15.858 20:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:15.859 rmmod nvme_tcp 00:12:15.859 rmmod nvme_fabrics 00:12:15.859 rmmod nvme_keyring 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1486319 ']' 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1486319 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1486319 ']' 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1486319 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1486319 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1486319' 00:12:15.859 killing process with pid 1486319 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1486319 00:12:15.859 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1486319 00:12:16.120 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.120 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:16.120 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:16.120 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.120 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.120 20:48:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.120 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.120 20:48:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.034 20:48:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:18.034 00:12:18.034 real 0m14.723s 00:12:18.034 user 0m22.189s 00:12:18.034 sys 0m6.031s 00:12:18.034 20:48:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.034 20:48:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.034 ************************************ 00:12:18.034 END TEST nvmf_nvme_cli 00:12:18.034 ************************************ 00:12:18.296 20:48:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:18.296 20:48:21 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:18.296 20:48:21 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:18.296 20:48:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:18.296 20:48:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.296 20:48:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:18.296 ************************************ 00:12:18.296 START TEST nvmf_vfio_user 00:12:18.296 ************************************ 00:12:18.296 20:48:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:18.296 * Looking for test storage... 00:12:18.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.296 20:48:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:18.297 
20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1488120 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1488120' 00:12:18.297 Process pid: 1488120 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1488120 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1488120 ']' 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:18.297 20:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:18.558 [2024-07-15 20:48:22.190181] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:12:18.558 [2024-07-15 20:48:22.190248] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.558 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.558 [2024-07-15 20:48:22.255095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.558 [2024-07-15 20:48:22.329671] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.558 [2024-07-15 20:48:22.329707] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.558 [2024-07-15 20:48:22.329715] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.558 [2024-07-15 20:48:22.329722] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.558 [2024-07-15 20:48:22.329728] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:18.558 [2024-07-15 20:48:22.329869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.558 [2024-07-15 20:48:22.329983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.558 [2024-07-15 20:48:22.330159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.558 [2024-07-15 20:48:22.330178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.129 20:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.129 20:48:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:19.129 20:48:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:20.525 20:48:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:20.525 20:48:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:20.525 20:48:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:20.525 20:48:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:20.525 20:48:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:20.525 20:48:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:20.525 Malloc1 00:12:20.525 20:48:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:20.785 20:48:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:20.785 20:48:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:21.045 20:48:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:21.045 20:48:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:21.045 20:48:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:21.306 Malloc2 00:12:21.306 20:48:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:21.306 20:48:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:21.567 20:48:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:21.830 20:48:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:21.830 20:48:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:21.830 20:48:25 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:21.830 20:48:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:21.830 20:48:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:21.830 20:48:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:21.830 [2024-07-15 20:48:25.561038] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:12:21.830 [2024-07-15 20:48:25.561107] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488793 ] 00:12:21.830 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.830 [2024-07-15 20:48:25.593737] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:21.830 [2024-07-15 20:48:25.599081] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:21.830 [2024-07-15 20:48:25.599100] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4d1da56000 00:12:21.830 [2024-07-15 20:48:25.600075] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:21.830 [2024-07-15 20:48:25.601076] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:21.830 [2024-07-15 20:48:25.602080] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:21.830 [2024-07-15 20:48:25.603090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:21.831 [2024-07-15 20:48:25.604091] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:21.831 [2024-07-15 20:48:25.605103] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:21.831 [2024-07-15 20:48:25.606104] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:21.831 [2024-07-15 20:48:25.607111] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:21.831 [2024-07-15 20:48:25.608119] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:21.831 [2024-07-15 20:48:25.608131] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4d1da4b000 00:12:21.831 [2024-07-15 20:48:25.609458] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:21.831 [2024-07-15 20:48:25.630378] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:21.831 [2024-07-15 20:48:25.630403] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:21.831 [2024-07-15 20:48:25.633251] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:21.831 [2024-07-15 20:48:25.633294] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:21.831 [2024-07-15 20:48:25.633376] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:21.831 [2024-07-15 20:48:25.633392] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:21.831 [2024-07-15 20:48:25.633398] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:21.831 [2024-07-15 20:48:25.634257] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:21.831 [2024-07-15 20:48:25.634266] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:21.831 [2024-07-15 20:48:25.634273] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:21.831 [2024-07-15 20:48:25.635260] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:21.831 [2024-07-15 20:48:25.635269] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:21.831 [2024-07-15 20:48:25.635277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:21.831 [2024-07-15 20:48:25.636270] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:21.831 [2024-07-15 20:48:25.636278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:21.831 [2024-07-15 20:48:25.637272] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:21.831 [2024-07-15 20:48:25.637280] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:21.831 [2024-07-15 20:48:25.637288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:21.831 [2024-07-15 20:48:25.637295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:21.831 [2024-07-15 20:48:25.637401] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:21.831 [2024-07-15 20:48:25.637405] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:21.831 [2024-07-15 20:48:25.637411] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:21.831 [2024-07-15 20:48:25.638279] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:21.831 [2024-07-15 20:48:25.639278] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:21.831 [2024-07-15 20:48:25.640289] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:21.831 [2024-07-15 20:48:25.641288] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:21.831 [2024-07-15 20:48:25.641341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:21.831 [2024-07-15 20:48:25.642295] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:21.831 [2024-07-15 20:48:25.642303] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:21.831 [2024-07-15 20:48:25.642307] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:21.831 [2024-07-15 20:48:25.642328] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:21.831 [2024-07-15 20:48:25.642340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:21.831 [2024-07-15 20:48:25.642355] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:21.831 [2024-07-15 20:48:25.642360] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:21.831 [2024-07-15 20:48:25.642372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:21.831 [2024-07-15 20:48:25.642401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:21.831 [2024-07-15 20:48:25.642410] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:21.831 [2024-07-15 20:48:25.642417] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:21.831 [2024-07-15 20:48:25.642422] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:21.831 [2024-07-15 20:48:25.642427] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:21.831 [2024-07-15 20:48:25.642432] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:21.831 [2024-07-15 20:48:25.642436] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:21.831 [2024-07-15 20:48:25.642443] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:21.831 [2024-07-15 20:48:25.642451] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:21.831 [2024-07-15 20:48:25.642461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:21.831 [2024-07-15 20:48:25.642471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:21.831 [2024-07-15 20:48:25.642485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.831 [2024-07-15 20:48:25.642494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.831 [2024-07-15 20:48:25.642502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.831 [2024-07-15 20:48:25.642511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.831 [2024-07-15 20:48:25.642515] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:21.831 [2024-07-15 20:48:25.642524] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:21.831 [2024-07-15 20:48:25.642533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:21.831 [2024-07-15 20:48:25.642542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:21.831 [2024-07-15 20:48:25.642548] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:21.831 [2024-07-15 20:48:25.642553] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:21.831 [2024-07-15 20:48:25.642560] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:21.831 [2024-07-15 20:48:25.642566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:21.831 [2024-07-15 20:48:25.642574] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:21.831 [2024-07-15 20:48:25.642584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:21.831 [2024-07-15 20:48:25.642645] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:21.831 [2024-07-15 20:48:25.642653] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:21.831 [2024-07-15 20:48:25.642661] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:21.831 [2024-07-15 20:48:25.642665] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:21.831 [2024-07-15 20:48:25.642672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:21.831 [2024-07-15 20:48:25.642688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:21.831 [2024-07-15 20:48:25.642696] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:21.831 [2024-07-15 20:48:25.642706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:21.831 [2024-07-15 20:48:25.642714] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:21.831 [2024-07-15 20:48:25.642722] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:21.831 [2024-07-15 20:48:25.642726] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:21.831 [2024-07-15 20:48:25.642732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:21.831 [2024-07-15 20:48:25.642745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:21.831 [2024-07-15 20:48:25.642757] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:21.831 [2024-07-15 20:48:25.642765] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:21.831 [2024-07-15 20:48:25.642772] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:21.831 [2024-07-15 20:48:25.642776] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:21.831 [2024-07-15 20:48:25.642782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:21.831 [2024-07-15 20:48:25.642796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:21.832 [2024-07-15 20:48:25.642804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:21.832 [2024-07-15 20:48:25.642810] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:12:21.832 [2024-07-15 20:48:25.642817] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:21.832 [2024-07-15 20:48:25.642823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:21.832 [2024-07-15 20:48:25.642829] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:21.832 [2024-07-15 20:48:25.642834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:21.832 [2024-07-15 20:48:25.642838] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:21.832 [2024-07-15 20:48:25.642843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:21.832 [2024-07-15 20:48:25.642848] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:21.832 [2024-07-15 20:48:25.642865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:21.832 [2024-07-15 20:48:25.642877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:21.832 [2024-07-15 20:48:25.642889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:21.832 [2024-07-15 20:48:25.642896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:21.832 [2024-07-15 20:48:25.642908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:21.832 [2024-07-15 20:48:25.642918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:21.832 [2024-07-15 20:48:25.642929] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:21.832 [2024-07-15 20:48:25.642936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:21.832 [2024-07-15 20:48:25.642949] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:21.832 [2024-07-15 20:48:25.642954] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:21.832 [2024-07-15 20:48:25.642958] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:21.832 [2024-07-15 20:48:25.642961] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:21.832 [2024-07-15 20:48:25.642968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:21.832 [2024-07-15 20:48:25.642976] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:21.832 
[2024-07-15 20:48:25.642980] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:21.832 [2024-07-15 20:48:25.642986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:21.832 [2024-07-15 20:48:25.642993] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:21.832 [2024-07-15 20:48:25.642998] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:21.832 [2024-07-15 20:48:25.643003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:21.832 [2024-07-15 20:48:25.643011] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:21.832 [2024-07-15 20:48:25.643016] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:21.832 [2024-07-15 20:48:25.643021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:21.832 [2024-07-15 20:48:25.643028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:21.832 [2024-07-15 20:48:25.643040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:21.832 [2024-07-15 20:48:25.643131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:21.832 [2024-07-15 20:48:25.643139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:21.832 ===================================================== 00:12:21.832 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:21.832 ===================================================== 00:12:21.832 Controller Capabilities/Features 00:12:21.832 ================================ 00:12:21.832 Vendor ID: 4e58 00:12:21.832 Subsystem Vendor ID: 4e58 00:12:21.832 Serial Number: SPDK1 00:12:21.832 Model Number: SPDK bdev Controller 00:12:21.832 Firmware Version: 24.09 00:12:21.832 Recommended Arb Burst: 6 00:12:21.832 IEEE OUI Identifier: 8d 6b 50 00:12:21.832 Multi-path I/O 00:12:21.832 May have multiple subsystem ports: Yes 00:12:21.832 May have multiple controllers: Yes 00:12:21.832 Associated with SR-IOV VF: No 00:12:21.832 Max Data Transfer Size: 131072 00:12:21.832 Max Number of Namespaces: 32 00:12:21.832 Max Number of I/O Queues: 127 00:12:21.832 NVMe Specification Version (VS): 1.3 00:12:21.832 NVMe Specification Version (Identify): 1.3 00:12:21.832 Maximum Queue Entries: 256 00:12:21.832 Contiguous Queues Required: Yes 00:12:21.832 Arbitration Mechanisms Supported 00:12:21.832 Weighted Round Robin: Not Supported 00:12:21.832 Vendor Specific: Not Supported 00:12:21.832 Reset Timeout: 15000 ms 00:12:21.832 Doorbell Stride: 4 bytes 00:12:21.832 NVM Subsystem Reset: Not Supported 00:12:21.832 Command Sets Supported 00:12:21.832 NVM Command Set: Supported 00:12:21.832 Boot Partition: Not Supported 00:12:21.832 Memory Page Size Minimum: 4096 bytes 00:12:21.832 Memory Page Size Maximum: 4096 bytes 00:12:21.832 Persistent Memory Region: Not Supported 
00:12:21.832 Optional Asynchronous Events Supported 00:12:21.832 Namespace Attribute Notices: Supported 00:12:21.832 Firmware Activation Notices: Not Supported 00:12:21.832 ANA Change Notices: Not Supported 00:12:21.832 PLE Aggregate Log Change Notices: Not Supported 00:12:21.832 LBA Status Info Alert Notices: Not Supported 00:12:21.832 EGE Aggregate Log Change Notices: Not Supported 00:12:21.832 Normal NVM Subsystem Shutdown event: Not Supported 00:12:21.832 Zone Descriptor Change Notices: Not Supported 00:12:21.832 Discovery Log Change Notices: Not Supported 00:12:21.832 Controller Attributes 00:12:21.832 128-bit Host Identifier: Supported 00:12:21.832 Non-Operational Permissive Mode: Not Supported 00:12:21.832 NVM Sets: Not Supported 00:12:21.832 Read Recovery Levels: Not Supported 00:12:21.832 Endurance Groups: Not Supported 00:12:21.832 Predictable Latency Mode: Not Supported 00:12:21.832 Traffic Based Keep ALive: Not Supported 00:12:21.832 Namespace Granularity: Not Supported 00:12:21.832 SQ Associations: Not Supported 00:12:21.832 UUID List: Not Supported 00:12:21.832 Multi-Domain Subsystem: Not Supported 00:12:21.832 Fixed Capacity Management: Not Supported 00:12:21.832 Variable Capacity Management: Not Supported 00:12:21.832 Delete Endurance Group: Not Supported 00:12:21.832 Delete NVM Set: Not Supported 00:12:21.832 Extended LBA Formats Supported: Not Supported 00:12:21.832 Flexible Data Placement Supported: Not Supported 00:12:21.832 00:12:21.832 Controller Memory Buffer Support 00:12:21.832 ================================ 00:12:21.832 Supported: No 00:12:21.832 00:12:21.832 Persistent Memory Region Support 00:12:21.832 ================================ 00:12:21.832 Supported: No 00:12:21.832 00:12:21.832 Admin Command Set Attributes 00:12:21.832 ============================ 00:12:21.832 Security Send/Receive: Not Supported 00:12:21.832 Format NVM: Not Supported 00:12:21.832 Firmware Activate/Download: Not Supported 00:12:21.832 Namespace Management: Not Supported 00:12:21.832 Device Self-Test: Not Supported 00:12:21.832 Directives: Not Supported 00:12:21.832 NVMe-MI: Not Supported 00:12:21.832 Virtualization Management: Not Supported 00:12:21.832 Doorbell Buffer Config: Not Supported 00:12:21.832 Get LBA Status Capability: Not Supported 00:12:21.832 Command & Feature Lockdown Capability: Not Supported 00:12:21.832 Abort Command Limit: 4 00:12:21.832 Async Event Request Limit: 4 00:12:21.832 Number of Firmware Slots: N/A 00:12:21.832 Firmware Slot 1 Read-Only: N/A 00:12:21.832 Firmware Activation Without Reset: N/A 00:12:21.832 Multiple Update Detection Support: N/A 00:12:21.832 Firmware Update Granularity: No Information Provided 00:12:21.832 Per-Namespace SMART Log: No 00:12:21.832 Asymmetric Namespace Access Log Page: Not Supported 00:12:21.832 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:21.832 Command Effects Log Page: Supported 00:12:21.832 Get Log Page Extended Data: Supported 00:12:21.832 Telemetry Log Pages: Not Supported 00:12:21.832 Persistent Event Log Pages: Not Supported 00:12:21.832 Supported Log Pages Log Page: May Support 00:12:21.832 Commands Supported & Effects Log Page: Not Supported 00:12:21.832 Feature Identifiers & Effects Log Page:May Support 00:12:21.832 NVMe-MI Commands & Effects Log Page: May Support 00:12:21.832 Data Area 4 for Telemetry Log: Not Supported 00:12:21.832 Error Log Page Entries Supported: 128 00:12:21.832 Keep Alive: Supported 00:12:21.832 Keep Alive Granularity: 10000 ms 00:12:21.832 00:12:21.832 NVM Command Set Attributes 
00:12:21.832 ========================== 00:12:21.832 Submission Queue Entry Size 00:12:21.832 Max: 64 00:12:21.832 Min: 64 00:12:21.832 Completion Queue Entry Size 00:12:21.832 Max: 16 00:12:21.832 Min: 16 00:12:21.832 Number of Namespaces: 32 00:12:21.832 Compare Command: Supported 00:12:21.832 Write Uncorrectable Command: Not Supported 00:12:21.832 Dataset Management Command: Supported 00:12:21.832 Write Zeroes Command: Supported 00:12:21.832 Set Features Save Field: Not Supported 00:12:21.832 Reservations: Not Supported 00:12:21.832 Timestamp: Not Supported 00:12:21.832 Copy: Supported 00:12:21.832 Volatile Write Cache: Present 00:12:21.833 Atomic Write Unit (Normal): 1 00:12:21.833 Atomic Write Unit (PFail): 1 00:12:21.833 Atomic Compare & Write Unit: 1 00:12:21.833 Fused Compare & Write: Supported 00:12:21.833 Scatter-Gather List 00:12:21.833 SGL Command Set: Supported (Dword aligned) 00:12:21.833 SGL Keyed: Not Supported 00:12:21.833 SGL Bit Bucket Descriptor: Not Supported 00:12:21.833 SGL Metadata Pointer: Not Supported 00:12:21.833 Oversized SGL: Not Supported 00:12:21.833 SGL Metadata Address: Not Supported 00:12:21.833 SGL Offset: Not Supported 00:12:21.833 Transport SGL Data Block: Not Supported 00:12:21.833 Replay Protected Memory Block: Not Supported 00:12:21.833 00:12:21.833 Firmware Slot Information 00:12:21.833 ========================= 00:12:21.833 Active slot: 1 00:12:21.833 Slot 1 Firmware Revision: 24.09 00:12:21.833 00:12:21.833 00:12:21.833 Commands Supported and Effects 00:12:21.833 ============================== 00:12:21.833 Admin Commands 00:12:21.833 -------------- 00:12:21.833 Get Log Page (02h): Supported 00:12:21.833 Identify (06h): Supported 00:12:21.833 Abort (08h): Supported 00:12:21.833 Set Features (09h): Supported 00:12:21.833 Get Features (0Ah): Supported 00:12:21.833 Asynchronous Event Request (0Ch): Supported 00:12:21.833 Keep Alive (18h): Supported 00:12:21.833 I/O Commands 00:12:21.833 ------------ 00:12:21.833 Flush (00h): Supported LBA-Change 00:12:21.833 Write (01h): Supported LBA-Change 00:12:21.833 Read (02h): Supported 00:12:21.833 Compare (05h): Supported 00:12:21.833 Write Zeroes (08h): Supported LBA-Change 00:12:21.833 Dataset Management (09h): Supported LBA-Change 00:12:21.833 Copy (19h): Supported LBA-Change 00:12:21.833 00:12:21.833 Error Log 00:12:21.833 ========= 00:12:21.833 00:12:21.833 Arbitration 00:12:21.833 =========== 00:12:21.833 Arbitration Burst: 1 00:12:21.833 00:12:21.833 Power Management 00:12:21.833 ================ 00:12:21.833 Number of Power States: 1 00:12:21.833 Current Power State: Power State #0 00:12:21.833 Power State #0: 00:12:21.833 Max Power: 0.00 W 00:12:21.833 Non-Operational State: Operational 00:12:21.833 Entry Latency: Not Reported 00:12:21.833 Exit Latency: Not Reported 00:12:21.833 Relative Read Throughput: 0 00:12:21.833 Relative Read Latency: 0 00:12:21.833 Relative Write Throughput: 0 00:12:21.833 Relative Write Latency: 0 00:12:21.833 Idle Power: Not Reported 00:12:21.833 Active Power: Not Reported 00:12:21.833 Non-Operational Permissive Mode: Not Supported 00:12:21.833 00:12:21.833 Health Information 00:12:21.833 ================== 00:12:21.833 Critical Warnings: 00:12:21.833 Available Spare Space: OK 00:12:21.833 Temperature: OK 00:12:21.833 Device Reliability: OK 00:12:21.833 Read Only: No 00:12:21.833 Volatile Memory Backup: OK 00:12:21.833 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:21.833 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:21.833 Available Spare: 0% 00:12:21.833 
[2024-07-15 20:48:25.643243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:21.833 [2024-07-15 20:48:25.643252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:21.833 [2024-07-15 20:48:25.643281] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:21.833 [2024-07-15 20:48:25.643290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.833 [2024-07-15 20:48:25.643297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.833 [2024-07-15 20:48:25.643303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.833 [2024-07-15 20:48:25.643311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.833 [2024-07-15 20:48:25.644310] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:21.833 [2024-07-15 20:48:25.644320] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:21.833 [2024-07-15 20:48:25.645311] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:21.833 [2024-07-15 20:48:25.645349] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:21.833 [2024-07-15 20:48:25.645355] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:21.833 [2024-07-15 20:48:25.646318] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:21.833 [2024-07-15 20:48:25.646329] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:21.833 [2024-07-15 20:48:25.646385] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:21.833 [2024-07-15 20:48:25.650131] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:21.833 Available Spare Threshold: 0% 00:12:21.833 Life Percentage Used: 0% 00:12:21.833 Data Units Read: 0 00:12:21.833 Data Units Written: 0 00:12:21.833 Host Read Commands: 0 00:12:21.833 Host Write Commands: 0 00:12:21.833 Controller Busy Time: 0 minutes 00:12:21.833 Power Cycles: 0 00:12:21.833 Power On Hours: 0 hours 00:12:21.833 Unsafe Shutdowns: 0 00:12:21.833 Unrecoverable Media Errors: 0 00:12:21.833 Lifetime Error Log Entries: 0 00:12:21.833 Warning Temperature Time: 0 minutes 00:12:21.833 Critical Temperature Time: 0 minutes 00:12:21.833 00:12:21.833 Number of Queues 00:12:21.833 ================ 00:12:21.833 Number of I/O Submission Queues: 127 00:12:21.833 Number of I/O Completion Queues: 127 00:12:21.833 00:12:21.833 Active Namespaces 00:12:21.833 ================= 00:12:21.833 Namespace ID:1 00:12:21.833 Error Recovery Timeout: Unlimited 00:12:21.833 Command 
Set Identifier: NVM (00h) 00:12:21.833 Deallocate: Supported 00:12:21.833 Deallocated/Unwritten Error: Not Supported 00:12:21.833 Deallocated Read Value: Unknown 00:12:21.833 Deallocate in Write Zeroes: Not Supported 00:12:21.833 Deallocated Guard Field: 0xFFFF 00:12:21.833 Flush: Supported 00:12:21.833 Reservation: Supported 00:12:21.833 Namespace Sharing Capabilities: Multiple Controllers 00:12:21.833 Size (in LBAs): 131072 (0GiB) 00:12:21.833 Capacity (in LBAs): 131072 (0GiB) 00:12:21.833 Utilization (in LBAs): 131072 (0GiB) 00:12:21.833 NGUID: 63D2E361C2D849648FFF083ECC97BE6D 00:12:21.833 UUID: 63d2e361-c2d8-4964-8fff-083ecc97be6d 00:12:21.833 Thin Provisioning: Not Supported 00:12:21.833 Per-NS Atomic Units: Yes 00:12:21.833 Atomic Boundary Size (Normal): 0 00:12:21.833 Atomic Boundary Size (PFail): 0 00:12:21.833 Atomic Boundary Offset: 0 00:12:21.833 Maximum Single Source Range Length: 65535 00:12:21.833 Maximum Copy Length: 65535 00:12:21.833 Maximum Source Range Count: 1 00:12:21.833 NGUID/EUI64 Never Reused: No 00:12:21.833 Namespace Write Protected: No 00:12:21.833 Number of LBA Formats: 1 00:12:21.833 Current LBA Format: LBA Format #00 00:12:21.833 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:21.833 00:12:21.833 20:48:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:22.094 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.094 [2024-07-15 20:48:25.833749] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:27.383 Initializing NVMe Controllers 00:12:27.383 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:27.383 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:27.383 Initialization complete. Launching workers. 00:12:27.383 ======================================================== 00:12:27.383 Latency(us) 00:12:27.383 Device Information : IOPS MiB/s Average min max 00:12:27.383 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39935.77 156.00 3205.03 845.70 6867.28 00:12:27.383 ======================================================== 00:12:27.383 Total : 39935.77 156.00 3205.03 845.70 6867.28 00:12:27.383 00:12:27.383 [2024-07-15 20:48:30.853097] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:27.383 20:48:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:27.383 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.383 [2024-07-15 20:48:31.036956] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:32.670 Initializing NVMe Controllers 00:12:32.670 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:32.670 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:32.670 Initialization complete. Launching workers. 
00:12:32.670 ======================================================== 00:12:32.670 Latency(us) 00:12:32.670 Device Information : IOPS MiB/s Average min max 00:12:32.670 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.74 7629.97 7989.74 00:12:32.670 ======================================================== 00:12:32.670 Total : 16051.20 62.70 7980.74 7629.97 7989.74 00:12:32.670 00:12:32.670 [2024-07-15 20:48:36.071791] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:32.670 20:48:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:32.670 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.671 [2024-07-15 20:48:36.259624] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:37.959 [2024-07-15 20:48:41.359443] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:37.959 Initializing NVMe Controllers 00:12:37.959 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:37.959 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:37.959 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:37.959 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:37.959 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:37.959 Initialization complete. Launching workers. 00:12:37.959 Starting thread on core 2 00:12:37.959 Starting thread on core 3 00:12:37.959 Starting thread on core 1 00:12:37.959 20:48:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:37.959 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.959 [2024-07-15 20:48:41.617463] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:41.285 [2024-07-15 20:48:44.696721] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:41.285 Initializing NVMe Controllers 00:12:41.285 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:41.285 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:41.285 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:41.285 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:41.285 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:41.285 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:41.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:41.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:41.285 Initialization complete. Launching workers. 
00:12:41.285 Starting thread on core 1 with urgent priority queue 00:12:41.285 Starting thread on core 2 with urgent priority queue 00:12:41.285 Starting thread on core 3 with urgent priority queue 00:12:41.285 Starting thread on core 0 with urgent priority queue 00:12:41.285 SPDK bdev Controller (SPDK1 ) core 0: 8230.00 IO/s 12.15 secs/100000 ios 00:12:41.285 SPDK bdev Controller (SPDK1 ) core 1: 15038.00 IO/s 6.65 secs/100000 ios 00:12:41.285 SPDK bdev Controller (SPDK1 ) core 2: 9490.67 IO/s 10.54 secs/100000 ios 00:12:41.285 SPDK bdev Controller (SPDK1 ) core 3: 17191.33 IO/s 5.82 secs/100000 ios 00:12:41.285 ======================================================== 00:12:41.285 00:12:41.285 20:48:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:41.285 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.285 [2024-07-15 20:48:44.958608] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:41.286 Initializing NVMe Controllers 00:12:41.286 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:41.286 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:41.286 Namespace ID: 1 size: 0GB 00:12:41.286 Initialization complete. 00:12:41.286 INFO: using host memory buffer for IO 00:12:41.286 Hello world! 00:12:41.286 [2024-07-15 20:48:44.991810] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:41.286 20:48:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:41.286 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.545 [2024-07-15 20:48:45.257539] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:42.488 Initializing NVMe Controllers 00:12:42.488 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:42.488 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:42.488 Initialization complete. Launching workers. 
00:12:42.488 submit (in ns) avg, min, max = 6974.2, 3897.5, 4001180.0 00:12:42.488 complete (in ns) avg, min, max = 18326.1, 2365.0, 5994710.0 00:12:42.488 00:12:42.488 Submit histogram 00:12:42.488 ================ 00:12:42.488 Range in us Cumulative Count 00:12:42.488 3.893 - 3.920: 1.2054% ( 231) 00:12:42.488 3.920 - 3.947: 8.0781% ( 1317) 00:12:42.488 3.947 - 3.973: 17.9043% ( 1883) 00:12:42.488 3.973 - 4.000: 29.2282% ( 2170) 00:12:42.488 4.000 - 4.027: 40.0981% ( 2083) 00:12:42.488 4.027 - 4.053: 51.9752% ( 2276) 00:12:42.488 4.053 - 4.080: 68.6114% ( 3188) 00:12:42.488 4.080 - 4.107: 83.1029% ( 2777) 00:12:42.488 4.107 - 4.133: 92.5742% ( 1815) 00:12:42.488 4.133 - 4.160: 97.2395% ( 894) 00:12:42.488 4.160 - 4.187: 98.7893% ( 297) 00:12:42.488 4.187 - 4.213: 99.3529% ( 108) 00:12:42.488 4.213 - 4.240: 99.4834% ( 25) 00:12:42.488 4.240 - 4.267: 99.5356% ( 10) 00:12:42.488 4.267 - 4.293: 99.5408% ( 1) 00:12:42.488 4.293 - 4.320: 99.5460% ( 1) 00:12:42.488 4.373 - 4.400: 99.5512% ( 1) 00:12:42.488 4.400 - 4.427: 99.5564% ( 1) 00:12:42.488 4.480 - 4.507: 99.5617% ( 1) 00:12:42.488 4.613 - 4.640: 99.5669% ( 1) 00:12:42.488 4.693 - 4.720: 99.5721% ( 1) 00:12:42.488 5.200 - 5.227: 99.5773% ( 1) 00:12:42.488 5.227 - 5.253: 99.5825% ( 1) 00:12:42.488 5.280 - 5.307: 99.5877% ( 1) 00:12:42.488 5.387 - 5.413: 99.5930% ( 1) 00:12:42.488 5.467 - 5.493: 99.5982% ( 1) 00:12:42.488 5.493 - 5.520: 99.6034% ( 1) 00:12:42.488 5.520 - 5.547: 99.6086% ( 1) 00:12:42.488 5.573 - 5.600: 99.6138% ( 1) 00:12:42.488 5.680 - 5.707: 99.6191% ( 1) 00:12:42.488 5.920 - 5.947: 99.6243% ( 1) 00:12:42.488 6.027 - 6.053: 99.6295% ( 1) 00:12:42.488 6.053 - 6.080: 99.6347% ( 1) 00:12:42.488 6.080 - 6.107: 99.6399% ( 1) 00:12:42.488 6.107 - 6.133: 99.6451% ( 1) 00:12:42.488 6.213 - 6.240: 99.6504% ( 1) 00:12:42.488 6.293 - 6.320: 99.6556% ( 1) 00:12:42.488 6.320 - 6.347: 99.6608% ( 1) 00:12:42.488 6.400 - 6.427: 99.6712% ( 2) 00:12:42.488 6.987 - 7.040: 99.6765% ( 1) 00:12:42.488 7.200 - 7.253: 99.6817% ( 1) 00:12:42.488 7.307 - 7.360: 99.6869% ( 1) 00:12:42.488 7.413 - 7.467: 99.6921% ( 1) 00:12:42.488 7.467 - 7.520: 99.6973% ( 1) 00:12:42.488 7.520 - 7.573: 99.7130% ( 3) 00:12:42.488 7.573 - 7.627: 99.7339% ( 4) 00:12:42.488 7.627 - 7.680: 99.7391% ( 1) 00:12:42.488 7.680 - 7.733: 99.7547% ( 3) 00:12:42.488 7.733 - 7.787: 99.7704% ( 3) 00:12:42.488 7.787 - 7.840: 99.7756% ( 1) 00:12:42.488 8.053 - 8.107: 99.7860% ( 2) 00:12:42.488 8.107 - 8.160: 99.7913% ( 1) 00:12:42.488 8.213 - 8.267: 99.8121% ( 4) 00:12:42.488 8.267 - 8.320: 99.8330% ( 4) 00:12:42.488 8.320 - 8.373: 99.8434% ( 2) 00:12:42.488 8.587 - 8.640: 99.8487% ( 1) 00:12:42.488 8.640 - 8.693: 99.8591% ( 2) 00:12:42.488 8.747 - 8.800: 99.8748% ( 3) 00:12:42.488 8.853 - 8.907: 99.8800% ( 1) 00:12:42.488 8.907 - 8.960: 99.8852% ( 1) 00:12:42.488 9.013 - 9.067: 99.8956% ( 2) 00:12:42.488 9.333 - 9.387: 99.9009% ( 1) 00:12:42.488 9.707 - 9.760: 99.9061% ( 1) 00:12:42.488 9.760 - 9.813: 99.9113% ( 1) 00:12:42.488 11.040 - 11.093: 99.9165% ( 1) 00:12:42.488 13.600 - 13.653: 99.9217% ( 1) 00:12:42.488 16.533 - 16.640: 99.9269% ( 1) 00:12:42.488 3986.773 - 4014.080: 100.0000% ( 14) 00:12:42.488 00:12:42.488 Complete histogram 00:12:42.488 ================== 00:12:42.488 Range in us Cumulative Count 00:12:42.488 2.360 - 2.373: 0.0052% ( 1) 00:12:42.488 2.373 - 2.387: 0.0574% ( 10) 00:12:42.488 2.387 - 2.400: 0.9393% ( 169) 00:12:42.488 2.400 - 2.413: 1.0019% ( 12) 00:12:42.488 2.413 - 2.427: 1.2524% ( 48) 00:12:42.488 2.427 - 2.440: 1.3098% ( 11) 00:12:42.488 2.440 - 
2.453: 35.9130% ( 6631) 00:12:42.488 2.453 - 2.467: 57.9763% ( 4228) 00:12:42.488 2.467 - 2.480: 68.5122% ( 2019) 00:12:42.488 2.480 - 2.493: 77.6027% ( 1742) 00:12:42.488 [2024-07-15 20:48:46.279999] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:42.488 2.493 - 2.507: 81.1877% ( 687) 00:12:42.488 2.507 - 2.520: 83.7604% ( 493) 00:12:42.488 2.520 - 2.533: 89.7667% ( 1151) 00:12:42.488 2.533 - 2.547: 94.3485% ( 878) 00:12:42.488 2.547 - 2.560: 96.6759% ( 446) 00:12:42.488 2.560 - 2.573: 98.3666% ( 324) 00:12:42.488 2.573 - 2.587: 99.1651% ( 153) 00:12:42.488 2.587 - 2.600: 99.3373% ( 33) 00:12:42.488 2.600 - 2.613: 99.3634% ( 5) 00:12:42.488 2.613 - 2.627: 99.3686% ( 1) 00:12:42.488 4.453 - 4.480: 99.3738% ( 1) 00:12:42.488 4.613 - 4.640: 99.3790% ( 1) 00:12:42.488 4.667 - 4.693: 99.3842% ( 1) 00:12:42.488 4.693 - 4.720: 99.3894% ( 1) 00:12:42.488 4.747 - 4.773: 99.3947% ( 1) 00:12:42.488 4.853 - 4.880: 99.3999% ( 1) 00:12:42.488 4.880 - 4.907: 99.4051% ( 1) 00:12:42.488 4.987 - 5.013: 99.4103% ( 1) 00:12:42.488 5.493 - 5.520: 99.4155% ( 1) 00:12:42.488 5.787 - 5.813: 99.4208% ( 1) 00:12:42.488 5.840 - 5.867: 99.4260% ( 1) 00:12:42.488 5.893 - 5.920: 99.4312% ( 1) 00:12:42.488 5.947 - 5.973: 99.4416% ( 2) 00:12:42.488 5.973 - 6.000: 99.4469% ( 1) 00:12:42.488 6.053 - 6.080: 99.4573% ( 2) 00:12:42.488 6.187 - 6.213: 99.4625% ( 1) 00:12:42.488 6.213 - 6.240: 99.4729% ( 2) 00:12:42.488 6.240 - 6.267: 99.4782% ( 1) 00:12:42.488 6.267 - 6.293: 99.4834% ( 1) 00:12:42.488 6.347 - 6.373: 99.4886% ( 1) 00:12:42.488 6.400 - 6.427: 99.4990% ( 2) 00:12:42.488 6.480 - 6.507: 99.5043% ( 1) 00:12:42.488 6.560 - 6.587: 99.5147% ( 2) 00:12:42.488 6.613 - 6.640: 99.5251% ( 2) 00:12:42.488 6.773 - 6.800: 99.5303% ( 1) 00:12:42.488 6.800 - 6.827: 99.5356% ( 1) 00:12:42.488 6.827 - 6.880: 99.5408% ( 1) 00:12:42.488 6.933 - 6.987: 99.5512% ( 2) 00:12:42.488 6.987 - 7.040: 99.5564% ( 1) 00:12:42.488 7.040 - 7.093: 99.5669% ( 2) 00:12:42.488 7.253 - 7.307: 99.5773% ( 2) 00:12:42.488 7.840 - 7.893: 99.5825% ( 1) 00:12:42.488 7.947 - 8.000: 99.5877% ( 1) 00:12:42.488 13.120 - 13.173: 99.5930% ( 1) 00:12:42.488 17.067 - 17.173: 99.5982% ( 1) 00:12:42.488 44.373 - 44.587: 99.6034% ( 1) 00:12:42.488 1856.853 - 1870.507: 99.6086% ( 1) 00:12:42.488 3986.773 - 4014.080: 99.9948% ( 74) 00:12:42.488 5980.160 - 6007.467: 100.0000% ( 1) 00:12:42.488 00:12:42.488 20:48:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:42.488 20:48:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:42.488 20:48:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:42.488 20:48:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:42.488 20:48:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:42.749 [ 00:12:42.749 { 00:12:42.749 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:42.749 "subtype": "Discovery", 00:12:42.749 "listen_addresses": [], 00:12:42.749 "allow_any_host": true, 00:12:42.749 "hosts": [] 00:12:42.749 }, 00:12:42.749 { 00:12:42.749 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:42.749 "subtype": "NVMe", 00:12:42.749 "listen_addresses": [ 00:12:42.749 { 00:12:42.749 "trtype": "VFIOUSER", 00:12:42.749 
"adrfam": "IPv4", 00:12:42.749 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:42.749 "trsvcid": "0" 00:12:42.749 } 00:12:42.749 ], 00:12:42.749 "allow_any_host": true, 00:12:42.749 "hosts": [], 00:12:42.749 "serial_number": "SPDK1", 00:12:42.749 "model_number": "SPDK bdev Controller", 00:12:42.749 "max_namespaces": 32, 00:12:42.749 "min_cntlid": 1, 00:12:42.749 "max_cntlid": 65519, 00:12:42.749 "namespaces": [ 00:12:42.749 { 00:12:42.749 "nsid": 1, 00:12:42.749 "bdev_name": "Malloc1", 00:12:42.749 "name": "Malloc1", 00:12:42.749 "nguid": "63D2E361C2D849648FFF083ECC97BE6D", 00:12:42.749 "uuid": "63d2e361-c2d8-4964-8fff-083ecc97be6d" 00:12:42.749 } 00:12:42.749 ] 00:12:42.749 }, 00:12:42.749 { 00:12:42.749 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:42.749 "subtype": "NVMe", 00:12:42.749 "listen_addresses": [ 00:12:42.749 { 00:12:42.749 "trtype": "VFIOUSER", 00:12:42.749 "adrfam": "IPv4", 00:12:42.749 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:42.749 "trsvcid": "0" 00:12:42.749 } 00:12:42.749 ], 00:12:42.749 "allow_any_host": true, 00:12:42.749 "hosts": [], 00:12:42.749 "serial_number": "SPDK2", 00:12:42.749 "model_number": "SPDK bdev Controller", 00:12:42.749 "max_namespaces": 32, 00:12:42.749 "min_cntlid": 1, 00:12:42.749 "max_cntlid": 65519, 00:12:42.749 "namespaces": [ 00:12:42.749 { 00:12:42.749 "nsid": 1, 00:12:42.749 "bdev_name": "Malloc2", 00:12:42.749 "name": "Malloc2", 00:12:42.749 "nguid": "BA3D9A75BF074FFA9BD2091DC8172E55", 00:12:42.749 "uuid": "ba3d9a75-bf07-4ffa-9bd2-091dc8172e55" 00:12:42.749 } 00:12:42.749 ] 00:12:42.749 } 00:12:42.749 ] 00:12:42.749 20:48:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:42.749 20:48:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:42.749 20:48:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1492847 00:12:42.749 20:48:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:42.749 20:48:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:42.749 20:48:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:42.749 20:48:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:42.749 20:48:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:42.749 20:48:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:42.749 20:48:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:42.749 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.010 [2024-07-15 20:48:46.661558] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:43.010 Malloc3 00:12:43.010 20:48:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:43.010 [2024-07-15 20:48:46.830662] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:43.010 20:48:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:43.010 Asynchronous Event Request test 00:12:43.010 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:43.010 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:43.010 Registering asynchronous event callbacks... 00:12:43.010 Starting namespace attribute notice tests for all controllers... 00:12:43.010 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:43.010 aer_cb - Changed Namespace 00:12:43.010 Cleaning up... 00:12:43.271 [ 00:12:43.271 { 00:12:43.271 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:43.271 "subtype": "Discovery", 00:12:43.271 "listen_addresses": [], 00:12:43.271 "allow_any_host": true, 00:12:43.271 "hosts": [] 00:12:43.271 }, 00:12:43.271 { 00:12:43.271 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:43.271 "subtype": "NVMe", 00:12:43.271 "listen_addresses": [ 00:12:43.271 { 00:12:43.271 "trtype": "VFIOUSER", 00:12:43.271 "adrfam": "IPv4", 00:12:43.271 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:43.271 "trsvcid": "0" 00:12:43.271 } 00:12:43.271 ], 00:12:43.271 "allow_any_host": true, 00:12:43.271 "hosts": [], 00:12:43.271 "serial_number": "SPDK1", 00:12:43.271 "model_number": "SPDK bdev Controller", 00:12:43.271 "max_namespaces": 32, 00:12:43.271 "min_cntlid": 1, 00:12:43.271 "max_cntlid": 65519, 00:12:43.271 "namespaces": [ 00:12:43.271 { 00:12:43.271 "nsid": 1, 00:12:43.271 "bdev_name": "Malloc1", 00:12:43.271 "name": "Malloc1", 00:12:43.271 "nguid": "63D2E361C2D849648FFF083ECC97BE6D", 00:12:43.271 "uuid": "63d2e361-c2d8-4964-8fff-083ecc97be6d" 00:12:43.271 }, 00:12:43.271 { 00:12:43.271 "nsid": 2, 00:12:43.271 "bdev_name": "Malloc3", 00:12:43.271 "name": "Malloc3", 00:12:43.271 "nguid": "D24061E8356F404FB670F28BD4ADD9AD", 00:12:43.271 "uuid": "d24061e8-356f-404f-b670-f28bd4add9ad" 00:12:43.271 } 00:12:43.271 ] 00:12:43.271 }, 00:12:43.271 { 00:12:43.271 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:43.271 "subtype": "NVMe", 00:12:43.271 "listen_addresses": [ 00:12:43.271 { 00:12:43.271 "trtype": "VFIOUSER", 00:12:43.271 "adrfam": "IPv4", 00:12:43.271 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:43.271 "trsvcid": "0" 00:12:43.271 } 00:12:43.271 ], 00:12:43.271 "allow_any_host": true, 00:12:43.271 "hosts": [], 00:12:43.271 "serial_number": "SPDK2", 00:12:43.271 "model_number": "SPDK bdev Controller", 00:12:43.271 
"max_namespaces": 32, 00:12:43.271 "min_cntlid": 1, 00:12:43.271 "max_cntlid": 65519, 00:12:43.271 "namespaces": [ 00:12:43.271 { 00:12:43.271 "nsid": 1, 00:12:43.271 "bdev_name": "Malloc2", 00:12:43.271 "name": "Malloc2", 00:12:43.271 "nguid": "BA3D9A75BF074FFA9BD2091DC8172E55", 00:12:43.271 "uuid": "ba3d9a75-bf07-4ffa-9bd2-091dc8172e55" 00:12:43.271 } 00:12:43.271 ] 00:12:43.271 } 00:12:43.271 ] 00:12:43.271 20:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1492847 00:12:43.271 20:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:43.271 20:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:43.271 20:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:43.271 20:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:43.271 [2024-07-15 20:48:47.041448] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:12:43.271 [2024-07-15 20:48:47.041488] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492860 ] 00:12:43.271 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.271 [2024-07-15 20:48:47.072676] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:43.271 [2024-07-15 20:48:47.081354] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:43.271 [2024-07-15 20:48:47.081374] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f16748e3000 00:12:43.271 [2024-07-15 20:48:47.082354] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:43.271 [2024-07-15 20:48:47.083356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:43.271 [2024-07-15 20:48:47.084362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:43.271 [2024-07-15 20:48:47.085374] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:43.271 [2024-07-15 20:48:47.086379] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:43.271 [2024-07-15 20:48:47.087389] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:43.272 [2024-07-15 20:48:47.088399] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:43.272 [2024-07-15 20:48:47.089408] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:43.272 [2024-07-15 20:48:47.090414] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:43.272 [2024-07-15 20:48:47.090424] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f16748d8000 00:12:43.272 [2024-07-15 20:48:47.091748] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:43.272 [2024-07-15 20:48:47.111949] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:43.272 [2024-07-15 20:48:47.111970] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:43.272 [2024-07-15 20:48:47.114021] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:43.272 [2024-07-15 20:48:47.114067] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:43.272 [2024-07-15 20:48:47.114154] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:43.272 [2024-07-15 20:48:47.114168] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:43.272 [2024-07-15 20:48:47.114173] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:43.272 [2024-07-15 20:48:47.115028] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:43.272 [2024-07-15 20:48:47.115037] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:43.272 [2024-07-15 20:48:47.115044] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:43.272 [2024-07-15 20:48:47.116031] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:43.272 [2024-07-15 20:48:47.116041] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:43.272 [2024-07-15 20:48:47.116048] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:43.272 [2024-07-15 20:48:47.117046] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:43.272 [2024-07-15 20:48:47.117055] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:43.272 [2024-07-15 20:48:47.118054] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:43.272 [2024-07-15 20:48:47.118063] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:43.272 [2024-07-15 20:48:47.118067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:43.272 [2024-07-15 20:48:47.118074] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:43.272 [2024-07-15 20:48:47.118179] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:43.272 [2024-07-15 20:48:47.118184] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:43.272 [2024-07-15 20:48:47.118189] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:43.272 [2024-07-15 20:48:47.119063] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:43.272 [2024-07-15 20:48:47.120071] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:43.272 [2024-07-15 20:48:47.121074] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:43.272 [2024-07-15 20:48:47.122076] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:43.272 [2024-07-15 20:48:47.122113] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:43.272 [2024-07-15 20:48:47.123090] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:43.272 [2024-07-15 20:48:47.123098] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:43.272 [2024-07-15 20:48:47.123103] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:43.272 [2024-07-15 20:48:47.123126] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:43.272 [2024-07-15 20:48:47.123133] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:43.272 [2024-07-15 20:48:47.123146] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:43.272 [2024-07-15 20:48:47.123151] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:43.272 [2024-07-15 20:48:47.123162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:43.272 [2024-07-15 20:48:47.131128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:43.272 [2024-07-15 20:48:47.131139] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:43.272 [2024-07-15 20:48:47.131146] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:43.272 [2024-07-15 20:48:47.131151] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:43.272 [2024-07-15 20:48:47.131156] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:43.272 [2024-07-15 20:48:47.131160] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:43.272 [2024-07-15 20:48:47.131164] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:43.272 [2024-07-15 20:48:47.131169] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:43.272 [2024-07-15 20:48:47.131176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:43.272 [2024-07-15 20:48:47.131186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:43.272 [2024-07-15 20:48:47.139128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:43.272 [2024-07-15 20:48:47.139142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.272 [2024-07-15 20:48:47.139151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.272 [2024-07-15 20:48:47.139159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.272 [2024-07-15 20:48:47.139167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.272 [2024-07-15 20:48:47.139172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:43.272 [2024-07-15 20:48:47.139180] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:43.272 [2024-07-15 20:48:47.139189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:43.272 [2024-07-15 20:48:47.147126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:43.272 [2024-07-15 20:48:47.147134] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:43.272 [2024-07-15 20:48:47.147139] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:43.272 [2024-07-15 20:48:47.147145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:43.272 [2024-07-15 20:48:47.147150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:43.272 [2024-07-15 20:48:47.147159] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:43.272 [2024-07-15 20:48:47.155128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:43.272 [2024-07-15 20:48:47.155195] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:43.272 [2024-07-15 20:48:47.155203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:43.272 [2024-07-15 20:48:47.155211] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:43.272 [2024-07-15 20:48:47.155215] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:43.272 [2024-07-15 20:48:47.155221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:43.534 [2024-07-15 20:48:47.163129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:43.534 [2024-07-15 20:48:47.163141] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:43.534 [2024-07-15 20:48:47.163149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:43.534 [2024-07-15 20:48:47.163156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:43.534 [2024-07-15 20:48:47.163163] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:43.534 [2024-07-15 20:48:47.163168] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:43.534 [2024-07-15 20:48:47.163174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:43.534 [2024-07-15 20:48:47.171128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:43.534 [2024-07-15 20:48:47.171141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:43.534 [2024-07-15 20:48:47.171148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:43.534 [2024-07-15 20:48:47.171155] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:43.534 [2024-07-15 20:48:47.171160] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:43.534 [2024-07-15 20:48:47.171166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:43.534 [2024-07-15 20:48:47.179129] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:43.534 [2024-07-15 20:48:47.179146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:43.534 [2024-07-15 20:48:47.179153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:43.534 [2024-07-15 20:48:47.179161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:43.534 [2024-07-15 20:48:47.179166] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:43.534 [2024-07-15 20:48:47.179171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:43.535 [2024-07-15 20:48:47.179176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:43.535 [2024-07-15 20:48:47.179183] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:43.535 [2024-07-15 20:48:47.179187] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:43.535 [2024-07-15 20:48:47.179192] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:43.535 [2024-07-15 20:48:47.179210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:43.535 [2024-07-15 20:48:47.187128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:43.535 [2024-07-15 20:48:47.187142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:43.535 [2024-07-15 20:48:47.195127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:43.535 [2024-07-15 20:48:47.195140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:43.535 [2024-07-15 20:48:47.203130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:43.535 [2024-07-15 20:48:47.203144] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:43.535 [2024-07-15 20:48:47.211130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:43.535 [2024-07-15 20:48:47.211146] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:43.535 [2024-07-15 20:48:47.211151] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:43.535 [2024-07-15 20:48:47.211154] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:12:43.535 [2024-07-15 20:48:47.211158] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:43.535 [2024-07-15 20:48:47.211164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:43.535 [2024-07-15 20:48:47.211172] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:43.535 [2024-07-15 20:48:47.211176] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:43.535 [2024-07-15 20:48:47.211182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:43.535 [2024-07-15 20:48:47.211190] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:43.535 [2024-07-15 20:48:47.211194] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:43.535 [2024-07-15 20:48:47.211200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:43.535 [2024-07-15 20:48:47.211207] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:43.535 [2024-07-15 20:48:47.211212] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:43.535 [2024-07-15 20:48:47.211217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:43.535 [2024-07-15 20:48:47.219129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:43.535 [2024-07-15 20:48:47.219145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:43.535 [2024-07-15 20:48:47.219155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:43.535 [2024-07-15 20:48:47.219164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:43.535 ===================================================== 00:12:43.535 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:43.535 ===================================================== 00:12:43.535 Controller Capabilities/Features 00:12:43.535 ================================ 00:12:43.535 Vendor ID: 4e58 00:12:43.535 Subsystem Vendor ID: 4e58 00:12:43.535 Serial Number: SPDK2 00:12:43.535 Model Number: SPDK bdev Controller 00:12:43.535 Firmware Version: 24.09 00:12:43.535 Recommended Arb Burst: 6 00:12:43.535 IEEE OUI Identifier: 8d 6b 50 00:12:43.535 Multi-path I/O 00:12:43.535 May have multiple subsystem ports: Yes 00:12:43.535 May have multiple controllers: Yes 00:12:43.535 Associated with SR-IOV VF: No 00:12:43.535 Max Data Transfer Size: 131072 00:12:43.535 Max Number of Namespaces: 32 00:12:43.535 Max Number of I/O Queues: 127 00:12:43.535 NVMe Specification Version (VS): 1.3 00:12:43.535 NVMe Specification Version (Identify): 1.3 00:12:43.535 Maximum Queue Entries: 256 00:12:43.535 Contiguous Queues Required: Yes 00:12:43.535 Arbitration Mechanisms 
Supported 00:12:43.535 Weighted Round Robin: Not Supported 00:12:43.535 Vendor Specific: Not Supported 00:12:43.535 Reset Timeout: 15000 ms 00:12:43.535 Doorbell Stride: 4 bytes 00:12:43.535 NVM Subsystem Reset: Not Supported 00:12:43.535 Command Sets Supported 00:12:43.535 NVM Command Set: Supported 00:12:43.535 Boot Partition: Not Supported 00:12:43.535 Memory Page Size Minimum: 4096 bytes 00:12:43.535 Memory Page Size Maximum: 4096 bytes 00:12:43.535 Persistent Memory Region: Not Supported 00:12:43.535 Optional Asynchronous Events Supported 00:12:43.535 Namespace Attribute Notices: Supported 00:12:43.535 Firmware Activation Notices: Not Supported 00:12:43.535 ANA Change Notices: Not Supported 00:12:43.535 PLE Aggregate Log Change Notices: Not Supported 00:12:43.535 LBA Status Info Alert Notices: Not Supported 00:12:43.535 EGE Aggregate Log Change Notices: Not Supported 00:12:43.535 Normal NVM Subsystem Shutdown event: Not Supported 00:12:43.535 Zone Descriptor Change Notices: Not Supported 00:12:43.535 Discovery Log Change Notices: Not Supported 00:12:43.535 Controller Attributes 00:12:43.535 128-bit Host Identifier: Supported 00:12:43.535 Non-Operational Permissive Mode: Not Supported 00:12:43.535 NVM Sets: Not Supported 00:12:43.535 Read Recovery Levels: Not Supported 00:12:43.535 Endurance Groups: Not Supported 00:12:43.535 Predictable Latency Mode: Not Supported 00:12:43.535 Traffic Based Keep ALive: Not Supported 00:12:43.535 Namespace Granularity: Not Supported 00:12:43.535 SQ Associations: Not Supported 00:12:43.535 UUID List: Not Supported 00:12:43.535 Multi-Domain Subsystem: Not Supported 00:12:43.535 Fixed Capacity Management: Not Supported 00:12:43.535 Variable Capacity Management: Not Supported 00:12:43.535 Delete Endurance Group: Not Supported 00:12:43.535 Delete NVM Set: Not Supported 00:12:43.535 Extended LBA Formats Supported: Not Supported 00:12:43.535 Flexible Data Placement Supported: Not Supported 00:12:43.535 00:12:43.535 Controller Memory Buffer Support 00:12:43.535 ================================ 00:12:43.535 Supported: No 00:12:43.535 00:12:43.535 Persistent Memory Region Support 00:12:43.535 ================================ 00:12:43.535 Supported: No 00:12:43.535 00:12:43.535 Admin Command Set Attributes 00:12:43.535 ============================ 00:12:43.535 Security Send/Receive: Not Supported 00:12:43.535 Format NVM: Not Supported 00:12:43.535 Firmware Activate/Download: Not Supported 00:12:43.535 Namespace Management: Not Supported 00:12:43.535 Device Self-Test: Not Supported 00:12:43.535 Directives: Not Supported 00:12:43.535 NVMe-MI: Not Supported 00:12:43.535 Virtualization Management: Not Supported 00:12:43.535 Doorbell Buffer Config: Not Supported 00:12:43.535 Get LBA Status Capability: Not Supported 00:12:43.535 Command & Feature Lockdown Capability: Not Supported 00:12:43.535 Abort Command Limit: 4 00:12:43.535 Async Event Request Limit: 4 00:12:43.535 Number of Firmware Slots: N/A 00:12:43.535 Firmware Slot 1 Read-Only: N/A 00:12:43.535 Firmware Activation Without Reset: N/A 00:12:43.535 Multiple Update Detection Support: N/A 00:12:43.535 Firmware Update Granularity: No Information Provided 00:12:43.535 Per-Namespace SMART Log: No 00:12:43.535 Asymmetric Namespace Access Log Page: Not Supported 00:12:43.535 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:43.535 Command Effects Log Page: Supported 00:12:43.535 Get Log Page Extended Data: Supported 00:12:43.535 Telemetry Log Pages: Not Supported 00:12:43.535 Persistent Event Log Pages: Not Supported 
00:12:43.535 Supported Log Pages Log Page: May Support 00:12:43.535 Commands Supported & Effects Log Page: Not Supported 00:12:43.535 Feature Identifiers & Effects Log Page:May Support 00:12:43.535 NVMe-MI Commands & Effects Log Page: May Support 00:12:43.535 Data Area 4 for Telemetry Log: Not Supported 00:12:43.535 Error Log Page Entries Supported: 128 00:12:43.535 Keep Alive: Supported 00:12:43.535 Keep Alive Granularity: 10000 ms 00:12:43.535 00:12:43.535 NVM Command Set Attributes 00:12:43.535 ========================== 00:12:43.535 Submission Queue Entry Size 00:12:43.535 Max: 64 00:12:43.535 Min: 64 00:12:43.535 Completion Queue Entry Size 00:12:43.535 Max: 16 00:12:43.535 Min: 16 00:12:43.535 Number of Namespaces: 32 00:12:43.535 Compare Command: Supported 00:12:43.535 Write Uncorrectable Command: Not Supported 00:12:43.535 Dataset Management Command: Supported 00:12:43.535 Write Zeroes Command: Supported 00:12:43.535 Set Features Save Field: Not Supported 00:12:43.535 Reservations: Not Supported 00:12:43.535 Timestamp: Not Supported 00:12:43.535 Copy: Supported 00:12:43.535 Volatile Write Cache: Present 00:12:43.535 Atomic Write Unit (Normal): 1 00:12:43.535 Atomic Write Unit (PFail): 1 00:12:43.535 Atomic Compare & Write Unit: 1 00:12:43.535 Fused Compare & Write: Supported 00:12:43.535 Scatter-Gather List 00:12:43.535 SGL Command Set: Supported (Dword aligned) 00:12:43.535 SGL Keyed: Not Supported 00:12:43.535 SGL Bit Bucket Descriptor: Not Supported 00:12:43.535 SGL Metadata Pointer: Not Supported 00:12:43.535 Oversized SGL: Not Supported 00:12:43.535 SGL Metadata Address: Not Supported 00:12:43.535 SGL Offset: Not Supported 00:12:43.535 Transport SGL Data Block: Not Supported 00:12:43.536 Replay Protected Memory Block: Not Supported 00:12:43.536 00:12:43.536 Firmware Slot Information 00:12:43.536 ========================= 00:12:43.536 Active slot: 1 00:12:43.536 Slot 1 Firmware Revision: 24.09 00:12:43.536 00:12:43.536 00:12:43.536 Commands Supported and Effects 00:12:43.536 ============================== 00:12:43.536 Admin Commands 00:12:43.536 -------------- 00:12:43.536 Get Log Page (02h): Supported 00:12:43.536 Identify (06h): Supported 00:12:43.536 Abort (08h): Supported 00:12:43.536 Set Features (09h): Supported 00:12:43.536 Get Features (0Ah): Supported 00:12:43.536 Asynchronous Event Request (0Ch): Supported 00:12:43.536 Keep Alive (18h): Supported 00:12:43.536 I/O Commands 00:12:43.536 ------------ 00:12:43.536 Flush (00h): Supported LBA-Change 00:12:43.536 Write (01h): Supported LBA-Change 00:12:43.536 Read (02h): Supported 00:12:43.536 Compare (05h): Supported 00:12:43.536 Write Zeroes (08h): Supported LBA-Change 00:12:43.536 Dataset Management (09h): Supported LBA-Change 00:12:43.536 Copy (19h): Supported LBA-Change 00:12:43.536 00:12:43.536 Error Log 00:12:43.536 ========= 00:12:43.536 00:12:43.536 Arbitration 00:12:43.536 =========== 00:12:43.536 Arbitration Burst: 1 00:12:43.536 00:12:43.536 Power Management 00:12:43.536 ================ 00:12:43.536 Number of Power States: 1 00:12:43.536 Current Power State: Power State #0 00:12:43.536 Power State #0: 00:12:43.536 Max Power: 0.00 W 00:12:43.536 Non-Operational State: Operational 00:12:43.536 Entry Latency: Not Reported 00:12:43.536 Exit Latency: Not Reported 00:12:43.536 Relative Read Throughput: 0 00:12:43.536 Relative Read Latency: 0 00:12:43.536 Relative Write Throughput: 0 00:12:43.536 Relative Write Latency: 0 00:12:43.536 Idle Power: Not Reported 00:12:43.536 Active Power: Not Reported 00:12:43.536 
Non-Operational Permissive Mode: Not Supported 00:12:43.536 00:12:43.536 Health Information 00:12:43.536 ================== 00:12:43.536 Critical Warnings: 00:12:43.536 Available Spare Space: OK 00:12:43.536 Temperature: OK 00:12:43.536 Device Reliability: OK 00:12:43.536 Read Only: No 00:12:43.536 Volatile Memory Backup: OK 00:12:43.536 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:43.536 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:43.536 Available Spare: 0% 00:12:43.536 Available Sp[2024-07-15 20:48:47.219262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:43.536 [2024-07-15 20:48:47.227129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:43.536 [2024-07-15 20:48:47.227163] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:43.536 [2024-07-15 20:48:47.227172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.536 [2024-07-15 20:48:47.227178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.536 [2024-07-15 20:48:47.227184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.536 [2024-07-15 20:48:47.227191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.536 [2024-07-15 20:48:47.227240] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:43.536 [2024-07-15 20:48:47.227251] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:43.536 [2024-07-15 20:48:47.228247] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:43.536 [2024-07-15 20:48:47.228294] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:43.536 [2024-07-15 20:48:47.228300] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:43.536 [2024-07-15 20:48:47.229249] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:43.536 [2024-07-15 20:48:47.229261] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:43.536 [2024-07-15 20:48:47.229312] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:43.536 [2024-07-15 20:48:47.230681] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:43.536 are Threshold: 0% 00:12:43.536 Life Percentage Used: 0% 00:12:43.536 Data Units Read: 0 00:12:43.536 Data Units Written: 0 00:12:43.536 Host Read Commands: 0 00:12:43.536 Host Write Commands: 0 00:12:43.536 Controller Busy Time: 0 minutes 00:12:43.536 Power Cycles: 0 00:12:43.536 Power On Hours: 0 hours 00:12:43.536 Unsafe Shutdowns: 0 00:12:43.536 Unrecoverable Media 
Errors: 0 00:12:43.536 Lifetime Error Log Entries: 0 00:12:43.536 Warning Temperature Time: 0 minutes 00:12:43.536 Critical Temperature Time: 0 minutes 00:12:43.536 00:12:43.536 Number of Queues 00:12:43.536 ================ 00:12:43.536 Number of I/O Submission Queues: 127 00:12:43.536 Number of I/O Completion Queues: 127 00:12:43.536 00:12:43.536 Active Namespaces 00:12:43.536 ================= 00:12:43.536 Namespace ID:1 00:12:43.536 Error Recovery Timeout: Unlimited 00:12:43.536 Command Set Identifier: NVM (00h) 00:12:43.536 Deallocate: Supported 00:12:43.536 Deallocated/Unwritten Error: Not Supported 00:12:43.536 Deallocated Read Value: Unknown 00:12:43.536 Deallocate in Write Zeroes: Not Supported 00:12:43.536 Deallocated Guard Field: 0xFFFF 00:12:43.536 Flush: Supported 00:12:43.536 Reservation: Supported 00:12:43.536 Namespace Sharing Capabilities: Multiple Controllers 00:12:43.536 Size (in LBAs): 131072 (0GiB) 00:12:43.536 Capacity (in LBAs): 131072 (0GiB) 00:12:43.536 Utilization (in LBAs): 131072 (0GiB) 00:12:43.536 NGUID: BA3D9A75BF074FFA9BD2091DC8172E55 00:12:43.536 UUID: ba3d9a75-bf07-4ffa-9bd2-091dc8172e55 00:12:43.536 Thin Provisioning: Not Supported 00:12:43.536 Per-NS Atomic Units: Yes 00:12:43.536 Atomic Boundary Size (Normal): 0 00:12:43.536 Atomic Boundary Size (PFail): 0 00:12:43.536 Atomic Boundary Offset: 0 00:12:43.536 Maximum Single Source Range Length: 65535 00:12:43.536 Maximum Copy Length: 65535 00:12:43.536 Maximum Source Range Count: 1 00:12:43.536 NGUID/EUI64 Never Reused: No 00:12:43.536 Namespace Write Protected: No 00:12:43.536 Number of LBA Formats: 1 00:12:43.536 Current LBA Format: LBA Format #00 00:12:43.536 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:43.536 00:12:43.536 20:48:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:43.536 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.536 [2024-07-15 20:48:47.416486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:48.817 Initializing NVMe Controllers 00:12:48.817 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:48.817 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:48.817 Initialization complete. Launching workers. 
00:12:48.817 ======================================================== 00:12:48.817 Latency(us) 00:12:48.817 Device Information : IOPS MiB/s Average min max 00:12:48.817 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40009.00 156.29 3199.33 840.56 6809.54 00:12:48.817 ======================================================== 00:12:48.817 Total : 40009.00 156.29 3199.33 840.56 6809.54 00:12:48.817 00:12:48.817 [2024-07-15 20:48:52.521317] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:48.817 20:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:48.817 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.817 [2024-07-15 20:48:52.704893] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:54.100 Initializing NVMe Controllers 00:12:54.100 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:54.100 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:54.100 Initialization complete. Launching workers. 00:12:54.100 ======================================================== 00:12:54.100 Latency(us) 00:12:54.100 Device Information : IOPS MiB/s Average min max 00:12:54.100 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35229.73 137.62 3632.97 1110.94 8181.06 00:12:54.100 ======================================================== 00:12:54.100 Total : 35229.73 137.62 3632.97 1110.94 8181.06 00:12:54.100 00:12:54.100 [2024-07-15 20:48:57.726048] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:54.100 20:48:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:54.100 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.100 [2024-07-15 20:48:57.915511] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:59.386 [2024-07-15 20:49:03.060211] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:59.386 Initializing NVMe Controllers 00:12:59.386 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:59.386 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:59.386 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:59.386 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:59.386 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:59.386 Initialization complete. Launching workers. 
00:12:59.386 Starting thread on core 2 00:12:59.386 Starting thread on core 3 00:12:59.386 Starting thread on core 1 00:12:59.386 20:49:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:59.386 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.646 [2024-07-15 20:49:03.314655] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:02.945 [2024-07-15 20:49:06.388603] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:02.945 Initializing NVMe Controllers 00:13:02.945 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:02.945 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:02.945 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:02.945 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:02.945 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:02.945 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:02.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:02.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:02.945 Initialization complete. Launching workers. 00:13:02.945 Starting thread on core 1 with urgent priority queue 00:13:02.945 Starting thread on core 2 with urgent priority queue 00:13:02.945 Starting thread on core 3 with urgent priority queue 00:13:02.945 Starting thread on core 0 with urgent priority queue 00:13:02.945 SPDK bdev Controller (SPDK2 ) core 0: 14187.33 IO/s 7.05 secs/100000 ios 00:13:02.945 SPDK bdev Controller (SPDK2 ) core 1: 7862.67 IO/s 12.72 secs/100000 ios 00:13:02.945 SPDK bdev Controller (SPDK2 ) core 2: 7759.00 IO/s 12.89 secs/100000 ios 00:13:02.945 SPDK bdev Controller (SPDK2 ) core 3: 10593.33 IO/s 9.44 secs/100000 ios 00:13:02.945 ======================================================== 00:13:02.945 00:13:02.945 20:49:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:02.945 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.945 [2024-07-15 20:49:06.660611] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:02.945 Initializing NVMe Controllers 00:13:02.945 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:02.945 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:02.945 Namespace ID: 1 size: 0GB 00:13:02.945 Initialization complete. 00:13:02.945 INFO: using host memory buffer for IO 00:13:02.945 Hello world! 
00:13:02.945 [2024-07-15 20:49:06.670665] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:02.945 20:49:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:02.945 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.206 [2024-07-15 20:49:06.931064] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:04.147 Initializing NVMe Controllers 00:13:04.147 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:04.147 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:04.147 Initialization complete. Launching workers. 00:13:04.147 submit (in ns) avg, min, max = 8770.1, 3897.5, 4001515.8 00:13:04.147 complete (in ns) avg, min, max = 19357.6, 2391.7, 5994306.7 00:13:04.147 00:13:04.147 Submit histogram 00:13:04.147 ================ 00:13:04.147 Range in us Cumulative Count 00:13:04.147 3.893 - 3.920: 1.8931% ( 361) 00:13:04.147 3.920 - 3.947: 8.8258% ( 1322) 00:13:04.147 3.947 - 3.973: 18.7949% ( 1901) 00:13:04.147 3.973 - 4.000: 29.9072% ( 2119) 00:13:04.147 4.000 - 4.027: 40.7677% ( 2071) 00:13:04.147 4.027 - 4.053: 52.2261% ( 2185) 00:13:04.147 4.053 - 4.080: 68.1577% ( 3038) 00:13:04.147 4.080 - 4.107: 83.4758% ( 2921) 00:13:04.147 4.107 - 4.133: 93.2036% ( 1855) 00:13:04.147 4.133 - 4.160: 97.5195% ( 823) 00:13:04.147 4.160 - 4.187: 98.9354% ( 270) 00:13:04.147 4.187 - 4.213: 99.3340% ( 76) 00:13:04.147 4.213 - 4.240: 99.4441% ( 21) 00:13:04.147 4.240 - 4.267: 99.4546% ( 2) 00:13:04.147 4.480 - 4.507: 99.4599% ( 1) 00:13:04.147 4.640 - 4.667: 99.4651% ( 1) 00:13:04.147 4.827 - 4.853: 99.4703% ( 1) 00:13:04.147 4.960 - 4.987: 99.4756% ( 1) 00:13:04.147 5.093 - 5.120: 99.4808% ( 1) 00:13:04.147 5.387 - 5.413: 99.4861% ( 1) 00:13:04.147 5.467 - 5.493: 99.4913% ( 1) 00:13:04.147 5.947 - 5.973: 99.4966% ( 1) 00:13:04.147 5.973 - 6.000: 99.5018% ( 1) 00:13:04.147 6.000 - 6.027: 99.5071% ( 1) 00:13:04.147 6.053 - 6.080: 99.5175% ( 2) 00:13:04.147 6.080 - 6.107: 99.5333% ( 3) 00:13:04.147 6.107 - 6.133: 99.5385% ( 1) 00:13:04.147 6.133 - 6.160: 99.5438% ( 1) 00:13:04.147 6.160 - 6.187: 99.5647% ( 4) 00:13:04.147 6.187 - 6.213: 99.6014% ( 7) 00:13:04.147 6.213 - 6.240: 99.6067% ( 1) 00:13:04.147 6.240 - 6.267: 99.6119% ( 1) 00:13:04.147 6.267 - 6.293: 99.6224% ( 2) 00:13:04.147 6.293 - 6.320: 99.6277% ( 1) 00:13:04.147 6.320 - 6.347: 99.6329% ( 1) 00:13:04.147 6.347 - 6.373: 99.6382% ( 1) 00:13:04.147 6.373 - 6.400: 99.6434% ( 1) 00:13:04.147 6.400 - 6.427: 99.6486% ( 1) 00:13:04.147 6.427 - 6.453: 99.6539% ( 1) 00:13:04.147 6.480 - 6.507: 99.6644% ( 2) 00:13:04.147 6.507 - 6.533: 99.6696% ( 1) 00:13:04.147 6.613 - 6.640: 99.6801% ( 2) 00:13:04.147 7.147 - 7.200: 99.6906% ( 2) 00:13:04.147 7.307 - 7.360: 99.6958% ( 1) 00:13:04.147 7.360 - 7.413: 99.7116% ( 3) 00:13:04.147 7.520 - 7.573: 99.7273% ( 3) 00:13:04.147 7.627 - 7.680: 99.7326% ( 1) 00:13:04.148 7.680 - 7.733: 99.7378% ( 1) 00:13:04.148 7.733 - 7.787: 99.7430% ( 1) 00:13:04.148 7.787 - 7.840: 99.7588% ( 3) 00:13:04.148 7.947 - 8.000: 99.7745% ( 3) 00:13:04.148 8.000 - 8.053: 99.7850% ( 2) 00:13:04.148 8.053 - 8.107: 99.7955% ( 2) 00:13:04.148 8.213 - 8.267: 99.8007% ( 1) 00:13:04.148 8.267 - 8.320: 99.8060% ( 1) 00:13:04.148 8.320 - 8.373: 99.8165% ( 2) 00:13:04.148 8.533 - 8.587: 99.8217% ( 1) 
00:13:04.148 8.640 - 8.693: 99.8269% ( 1) 00:13:04.148 8.853 - 8.907: 99.8322% ( 1) 00:13:04.148 8.907 - 8.960: 99.8427% ( 2) 00:13:04.148 9.013 - 9.067: 99.8532% ( 2) 00:13:04.148 9.067 - 9.120: 99.8584% ( 1) 00:13:04.148 9.227 - 9.280: 99.8637% ( 1) 00:13:04.148 9.440 - 9.493: 99.8689% ( 1) 00:13:04.148 10.027 - 10.080: 99.8741% ( 1) 00:13:04.148 40.320 - 40.533: 99.8794% ( 1) 00:13:04.148 2020.693 - 2034.347: 99.8846% ( 1) 00:13:04.148 3986.773 - 4014.080: 100.0000% ( 22) 00:13:04.148 00:13:04.148 Complete histogram 00:13:04.148 ================== 00:13:04.148 Range in us Cumulative Count 00:13:04.148 2.387 - 2.400: 0.0105% ( 2) 00:13:04.148 2.400 - 2.413: 0.6188% ( 116) 00:13:04.148 2.413 - 2.427: 0.8181% ( 38) 00:13:04.148 2.427 - 2.440: 1.1432% ( 62) 00:13:04.148 2.440 - 2.453: 50.9308% ( 9494) 00:13:04.148 2.453 - 2.467: 56.7780% ( 1115) 00:13:04.148 2.467 - 2.480: 72.4160% ( 2982) 00:13:04.148 2.480 - 2.493: 79.2700% ( 1307) 00:13:04.148 2.493 - 2.507: 81.4044% ( 407) 00:13:04.148 2.507 - [2024-07-15 20:49:08.027811] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:04.408 2.520: 85.3689% ( 756) 00:13:04.408 2.520 - 2.533: 90.8962% ( 1054) 00:13:04.408 2.533 - 2.547: 94.5723% ( 701) 00:13:04.408 2.547 - 2.560: 96.9007% ( 444) 00:13:04.408 2.560 - 2.573: 98.4477% ( 295) 00:13:04.408 2.573 - 2.587: 99.0928% ( 123) 00:13:04.408 2.587 - 2.600: 99.2081% ( 22) 00:13:04.408 2.600 - 2.613: 99.2448% ( 7) 00:13:04.408 2.613 - 2.627: 99.2501% ( 1) 00:13:04.408 4.347 - 4.373: 99.2553% ( 1) 00:13:04.408 4.427 - 4.453: 99.2606% ( 1) 00:13:04.408 4.480 - 4.507: 99.2658% ( 1) 00:13:04.408 4.560 - 4.587: 99.2868% ( 4) 00:13:04.408 4.613 - 4.640: 99.2920% ( 1) 00:13:04.408 4.640 - 4.667: 99.2973% ( 1) 00:13:04.408 4.667 - 4.693: 99.3025% ( 1) 00:13:04.408 4.747 - 4.773: 99.3130% ( 2) 00:13:04.408 5.707 - 5.733: 99.3183% ( 1) 00:13:04.408 5.733 - 5.760: 99.3235% ( 1) 00:13:04.408 5.760 - 5.787: 99.3340% ( 2) 00:13:04.408 5.840 - 5.867: 99.3392% ( 1) 00:13:04.408 5.867 - 5.893: 99.3445% ( 1) 00:13:04.408 5.893 - 5.920: 99.3602% ( 3) 00:13:04.408 5.947 - 5.973: 99.3655% ( 1) 00:13:04.408 6.000 - 6.027: 99.3707% ( 1) 00:13:04.408 6.027 - 6.053: 99.3969% ( 5) 00:13:04.408 6.053 - 6.080: 99.4022% ( 1) 00:13:04.408 6.107 - 6.133: 99.4074% ( 1) 00:13:04.408 6.160 - 6.187: 99.4284% ( 4) 00:13:04.408 6.187 - 6.213: 99.4389% ( 2) 00:13:04.408 6.213 - 6.240: 99.4494% ( 2) 00:13:04.408 6.293 - 6.320: 99.4546% ( 1) 00:13:04.408 6.320 - 6.347: 99.4651% ( 2) 00:13:04.408 6.427 - 6.453: 99.4703% ( 1) 00:13:04.408 6.533 - 6.560: 99.4756% ( 1) 00:13:04.408 6.613 - 6.640: 99.4808% ( 1) 00:13:04.408 6.667 - 6.693: 99.4913% ( 2) 00:13:04.408 6.827 - 6.880: 99.4966% ( 1) 00:13:04.408 6.933 - 6.987: 99.5018% ( 1) 00:13:04.408 6.987 - 7.040: 99.5071% ( 1) 00:13:04.408 7.040 - 7.093: 99.5123% ( 1) 00:13:04.408 7.093 - 7.147: 99.5175% ( 1) 00:13:04.408 7.147 - 7.200: 99.5228% ( 1) 00:13:04.409 7.467 - 7.520: 99.5280% ( 1) 00:13:04.409 7.627 - 7.680: 99.5333% ( 1) 00:13:04.409 7.893 - 7.947: 99.5385% ( 1) 00:13:04.409 8.373 - 8.427: 99.5438% ( 1) 00:13:04.409 10.827 - 10.880: 99.5490% ( 1) 00:13:04.409 11.360 - 11.413: 99.5543% ( 1) 00:13:04.409 12.533 - 12.587: 99.5595% ( 1) 00:13:04.409 13.547 - 13.600: 99.5647% ( 1) 00:13:04.409 15.147 - 15.253: 99.5700% ( 1) 00:13:04.409 48.213 - 48.427: 99.5752% ( 1) 00:13:04.409 149.333 - 150.187: 99.5805% ( 1) 00:13:04.409 3986.773 - 4014.080: 99.9948% ( 79) 00:13:04.409 5980.160 - 6007.467: 100.0000% ( 1) 00:13:04.409 
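For orientation, the perf, reconnect, hello_world and overhead runs above all reach the same controller through one generic SPDK transport-ID string rather than a PCI address. A minimal sketch of driving the same target by hand is given below; it only re-uses flags already visible in this log, the relative paths assume the SPDK source tree as the working directory, and the specific values (queue depth, run time, core mask) are illustrative rather than prescriptive.

# Sketch (assumptions noted above): address the vfio-user controller via its socket directory and NQN
TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
# 4 KiB reads, queue depth 128, 5 s, core mask 0x2, 256 MiB DPDK memory, as in the perf run above
./build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# per-command submit/complete latency histograms, as printed by the overhead tool above
./test/nvme/overhead/overhead -r "$TR" -o 4096 -t 1 -H -g -d 256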
00:13:04.409 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:04.409 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:04.409 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:04.409 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:04.409 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:04.409 [ 00:13:04.409 { 00:13:04.409 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:04.409 "subtype": "Discovery", 00:13:04.409 "listen_addresses": [], 00:13:04.409 "allow_any_host": true, 00:13:04.409 "hosts": [] 00:13:04.409 }, 00:13:04.409 { 00:13:04.409 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:04.409 "subtype": "NVMe", 00:13:04.409 "listen_addresses": [ 00:13:04.409 { 00:13:04.409 "trtype": "VFIOUSER", 00:13:04.409 "adrfam": "IPv4", 00:13:04.409 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:04.409 "trsvcid": "0" 00:13:04.409 } 00:13:04.409 ], 00:13:04.409 "allow_any_host": true, 00:13:04.409 "hosts": [], 00:13:04.409 "serial_number": "SPDK1", 00:13:04.409 "model_number": "SPDK bdev Controller", 00:13:04.409 "max_namespaces": 32, 00:13:04.409 "min_cntlid": 1, 00:13:04.409 "max_cntlid": 65519, 00:13:04.409 "namespaces": [ 00:13:04.409 { 00:13:04.409 "nsid": 1, 00:13:04.409 "bdev_name": "Malloc1", 00:13:04.409 "name": "Malloc1", 00:13:04.409 "nguid": "63D2E361C2D849648FFF083ECC97BE6D", 00:13:04.409 "uuid": "63d2e361-c2d8-4964-8fff-083ecc97be6d" 00:13:04.409 }, 00:13:04.409 { 00:13:04.409 "nsid": 2, 00:13:04.409 "bdev_name": "Malloc3", 00:13:04.409 "name": "Malloc3", 00:13:04.409 "nguid": "D24061E8356F404FB670F28BD4ADD9AD", 00:13:04.409 "uuid": "d24061e8-356f-404f-b670-f28bd4add9ad" 00:13:04.409 } 00:13:04.409 ] 00:13:04.409 }, 00:13:04.409 { 00:13:04.409 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:04.409 "subtype": "NVMe", 00:13:04.409 "listen_addresses": [ 00:13:04.409 { 00:13:04.409 "trtype": "VFIOUSER", 00:13:04.409 "adrfam": "IPv4", 00:13:04.409 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:04.409 "trsvcid": "0" 00:13:04.409 } 00:13:04.409 ], 00:13:04.409 "allow_any_host": true, 00:13:04.409 "hosts": [], 00:13:04.409 "serial_number": "SPDK2", 00:13:04.409 "model_number": "SPDK bdev Controller", 00:13:04.409 "max_namespaces": 32, 00:13:04.409 "min_cntlid": 1, 00:13:04.409 "max_cntlid": 65519, 00:13:04.409 "namespaces": [ 00:13:04.409 { 00:13:04.409 "nsid": 1, 00:13:04.409 "bdev_name": "Malloc2", 00:13:04.409 "name": "Malloc2", 00:13:04.409 "nguid": "BA3D9A75BF074FFA9BD2091DC8172E55", 00:13:04.409 "uuid": "ba3d9a75-bf07-4ffa-9bd2-091dc8172e55" 00:13:04.409 } 00:13:04.409 ] 00:13:04.409 } 00:13:04.409 ] 00:13:04.409 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:04.409 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1497094 00:13:04.409 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:04.409 20:49:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:04.409 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' 
trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:04.409 20:49:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:04.409 20:49:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:04.409 20:49:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:04.409 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:04.409 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:04.669 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.669 Malloc4 00:13:04.669 [2024-07-15 20:49:08.416514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:04.669 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:04.962 [2024-07-15 20:49:08.584591] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:04.962 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:04.962 Asynchronous Event Request test 00:13:04.963 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:04.963 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:04.963 Registering asynchronous event callbacks... 00:13:04.963 Starting namespace attribute notice tests for all controllers... 00:13:04.963 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:04.963 aer_cb - Changed Namespace 00:13:04.963 Cleaning up... 
00:13:04.963 [ 00:13:04.963 { 00:13:04.963 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:04.963 "subtype": "Discovery", 00:13:04.963 "listen_addresses": [], 00:13:04.963 "allow_any_host": true, 00:13:04.963 "hosts": [] 00:13:04.963 }, 00:13:04.963 { 00:13:04.963 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:04.963 "subtype": "NVMe", 00:13:04.963 "listen_addresses": [ 00:13:04.963 { 00:13:04.963 "trtype": "VFIOUSER", 00:13:04.963 "adrfam": "IPv4", 00:13:04.963 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:04.963 "trsvcid": "0" 00:13:04.963 } 00:13:04.963 ], 00:13:04.963 "allow_any_host": true, 00:13:04.963 "hosts": [], 00:13:04.963 "serial_number": "SPDK1", 00:13:04.963 "model_number": "SPDK bdev Controller", 00:13:04.963 "max_namespaces": 32, 00:13:04.963 "min_cntlid": 1, 00:13:04.963 "max_cntlid": 65519, 00:13:04.963 "namespaces": [ 00:13:04.963 { 00:13:04.963 "nsid": 1, 00:13:04.963 "bdev_name": "Malloc1", 00:13:04.963 "name": "Malloc1", 00:13:04.963 "nguid": "63D2E361C2D849648FFF083ECC97BE6D", 00:13:04.963 "uuid": "63d2e361-c2d8-4964-8fff-083ecc97be6d" 00:13:04.963 }, 00:13:04.963 { 00:13:04.963 "nsid": 2, 00:13:04.963 "bdev_name": "Malloc3", 00:13:04.963 "name": "Malloc3", 00:13:04.963 "nguid": "D24061E8356F404FB670F28BD4ADD9AD", 00:13:04.963 "uuid": "d24061e8-356f-404f-b670-f28bd4add9ad" 00:13:04.963 } 00:13:04.963 ] 00:13:04.963 }, 00:13:04.963 { 00:13:04.963 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:04.963 "subtype": "NVMe", 00:13:04.963 "listen_addresses": [ 00:13:04.963 { 00:13:04.963 "trtype": "VFIOUSER", 00:13:04.963 "adrfam": "IPv4", 00:13:04.963 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:04.963 "trsvcid": "0" 00:13:04.963 } 00:13:04.963 ], 00:13:04.963 "allow_any_host": true, 00:13:04.963 "hosts": [], 00:13:04.963 "serial_number": "SPDK2", 00:13:04.963 "model_number": "SPDK bdev Controller", 00:13:04.963 "max_namespaces": 32, 00:13:04.963 "min_cntlid": 1, 00:13:04.963 "max_cntlid": 65519, 00:13:04.963 "namespaces": [ 00:13:04.963 { 00:13:04.963 "nsid": 1, 00:13:04.963 "bdev_name": "Malloc2", 00:13:04.963 "name": "Malloc2", 00:13:04.963 "nguid": "BA3D9A75BF074FFA9BD2091DC8172E55", 00:13:04.963 "uuid": "ba3d9a75-bf07-4ffa-9bd2-091dc8172e55" 00:13:04.963 }, 00:13:04.963 { 00:13:04.963 "nsid": 2, 00:13:04.963 "bdev_name": "Malloc4", 00:13:04.963 "name": "Malloc4", 00:13:04.963 "nguid": "BA8D2E1ABBF942808577656EE98E02A3", 00:13:04.963 "uuid": "ba8d2e1a-bbf9-4280-8577-656ee98e02a3" 00:13:04.963 } 00:13:04.963 ] 00:13:04.963 } 00:13:04.963 ] 00:13:04.963 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1497094 00:13:04.963 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:04.963 20:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1488120 00:13:04.963 20:49:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1488120 ']' 00:13:04.963 20:49:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1488120 00:13:04.963 20:49:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:04.963 20:49:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:04.963 20:49:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1488120 00:13:04.963 20:49:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:04.963 20:49:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:13:04.963 20:49:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1488120' 00:13:04.963 killing process with pid 1488120 00:13:04.963 20:49:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1488120 00:13:04.963 20:49:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1488120 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1497230 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1497230' 00:13:05.224 Process pid: 1497230 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1497230 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1497230 ']' 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:05.224 20:49:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:05.224 [2024-07-15 20:49:09.064678] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:05.224 [2024-07-15 20:49:09.065568] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:13:05.224 [2024-07-15 20:49:09.065608] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.224 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.485 [2024-07-15 20:49:09.127386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.485 [2024-07-15 20:49:09.191619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.485 [2024-07-15 20:49:09.191654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:05.485 [2024-07-15 20:49:09.191662] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.485 [2024-07-15 20:49:09.191669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.485 [2024-07-15 20:49:09.191674] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.485 [2024-07-15 20:49:09.191802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.485 [2024-07-15 20:49:09.191936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.485 [2024-07-15 20:49:09.192093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.485 [2024-07-15 20:49:09.192094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.485 [2024-07-15 20:49:09.256526] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:05.485 [2024-07-15 20:49:09.256595] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:05.485 [2024-07-15 20:49:09.257608] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:05.485 [2024-07-15 20:49:09.258026] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:05.485 [2024-07-15 20:49:09.258134] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:06.057 20:49:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.057 20:49:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:06.057 20:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:06.996 20:49:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:07.256 20:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:07.256 20:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:07.256 20:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:07.256 20:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:07.256 20:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:07.515 Malloc1 00:13:07.515 20:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:07.515 20:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:07.775 20:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:08.035 20:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:13:08.035 20:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:08.035 20:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:08.035 Malloc2 00:13:08.035 20:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:08.295 20:49:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:08.556 20:49:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:08.556 20:49:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:08.556 20:49:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1497230 00:13:08.556 20:49:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1497230 ']' 00:13:08.556 20:49:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1497230 00:13:08.556 20:49:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:08.556 20:49:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:08.556 20:49:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1497230 00:13:08.556 20:49:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:08.556 20:49:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:08.556 20:49:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1497230' 00:13:08.556 killing process with pid 1497230 00:13:08.556 20:49:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1497230 00:13:08.556 20:49:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1497230 00:13:08.815 20:49:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:08.815 20:49:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:08.815 00:13:08.815 real 0m50.602s 00:13:08.815 user 3m20.575s 00:13:08.815 sys 0m2.983s 00:13:08.815 20:49:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:08.815 20:49:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:08.815 ************************************ 00:13:08.815 END TEST nvmf_vfio_user 00:13:08.815 ************************************ 00:13:08.815 20:49:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:08.815 20:49:12 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:08.815 20:49:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:08.815 20:49:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.815 20:49:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:08.815 ************************************ 00:13:08.815 START 
TEST nvmf_vfio_user_nvme_compliance 00:13:08.815 ************************************ 00:13:08.815 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:09.076 * Looking for test storage... 00:13:09.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1497979 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1497979' 00:13:09.076 Process pid: 1497979 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:09.076 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1497979 00:13:09.077 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:09.077 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1497979 ']' 00:13:09.077 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.077 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:09.077 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.077 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:09.077 20:49:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:09.077 [2024-07-15 20:49:12.855092] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:13:09.077 [2024-07-15 20:49:12.855165] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.077 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.077 [2024-07-15 20:49:12.918019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:09.336 [2024-07-15 20:49:12.982706] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.336 [2024-07-15 20:49:12.982745] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.337 [2024-07-15 20:49:12.982753] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.337 [2024-07-15 20:49:12.982759] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.337 [2024-07-15 20:49:12.982768] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:09.337 [2024-07-15 20:49:12.982910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.337 [2024-07-15 20:49:12.983020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.337 [2024-07-15 20:49:12.983023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.906 20:49:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:09.906 20:49:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:09.906 20:49:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:10.848 malloc0 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:10.848 20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.848 
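Before the compliance binary is launched, the rpc_cmd calls above assemble the target in five steps: create the VFIOUSER transport, create a 64 MiB malloc bdev, create subsystem nqn.2021-09.io.spdk:cnode0, attach the bdev as a namespace, and add a vfio-user listener rooted at /var/run/vfio-user. The same sequence, consolidated as direct scripts/rpc.py calls, might look as follows; this is a sketch that assumes an nvmf_tgt is already running on the default /var/tmp/spdk.sock and that the SPDK source tree is the working directory.

# Sketch: vfio-user target setup mirroring the rpc_cmd sequence above
mkdir -p /var/run/vfio-user
./scripts/rpc.py nvmf_create_transport -t VFIOUSER
./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0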
20:49:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:11.116 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.116 00:13:11.116 00:13:11.116 CUnit - A unit testing framework for C - Version 2.1-3 00:13:11.116 http://cunit.sourceforge.net/ 00:13:11.116 00:13:11.116 00:13:11.116 Suite: nvme_compliance 00:13:11.116 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 20:49:14.900647] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.116 [2024-07-15 20:49:14.901976] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:11.116 [2024-07-15 20:49:14.901987] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:11.116 [2024-07-15 20:49:14.901994] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:11.116 [2024-07-15 20:49:14.903661] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.116 passed 00:13:11.116 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 20:49:15.004265] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.116 [2024-07-15 20:49:15.007279] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.376 passed 00:13:11.376 Test: admin_identify_ns ...[2024-07-15 20:49:15.105456] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.376 [2024-07-15 20:49:15.165137] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:11.376 [2024-07-15 20:49:15.173137] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:11.376 [2024-07-15 20:49:15.194248] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.376 passed 00:13:11.637 Test: admin_get_features_mandatory_features ...[2024-07-15 20:49:15.289344] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.637 [2024-07-15 20:49:15.292360] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.637 passed 00:13:11.637 Test: admin_get_features_optional_features ...[2024-07-15 20:49:15.390916] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.637 [2024-07-15 20:49:15.393932] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.637 passed 00:13:11.637 Test: admin_set_features_number_of_queues ...[2024-07-15 20:49:15.490561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.897 [2024-07-15 20:49:15.599242] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.897 passed 00:13:11.897 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 20:49:15.693870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.897 [2024-07-15 20:49:15.696881] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.897 passed 00:13:12.158 Test: admin_get_log_page_with_lpo ...[2024-07-15 20:49:15.796395] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.158 [2024-07-15 20:49:15.865131] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:12.158 [2024-07-15 20:49:15.878198] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.158 passed 00:13:12.158 Test: fabric_property_get ...[2024-07-15 20:49:15.974348] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.158 [2024-07-15 20:49:15.975587] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:12.158 [2024-07-15 20:49:15.977363] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.158 passed 00:13:12.418 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 20:49:16.073023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.418 [2024-07-15 20:49:16.074277] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:12.418 [2024-07-15 20:49:16.076041] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.418 passed 00:13:12.418 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 20:49:16.175378] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.418 [2024-07-15 20:49:16.260130] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:12.418 [2024-07-15 20:49:16.276130] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:12.418 [2024-07-15 20:49:16.281213] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.678 passed 00:13:12.678 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 20:49:16.376392] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.678 [2024-07-15 20:49:16.377638] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:12.678 [2024-07-15 20:49:16.379413] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.678 passed 00:13:12.678 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 20:49:16.476387] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.678 [2024-07-15 20:49:16.556131] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:12.939 [2024-07-15 20:49:16.580130] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:12.939 [2024-07-15 20:49:16.585214] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.939 passed 00:13:12.939 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 20:49:16.680393] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.939 [2024-07-15 20:49:16.681635] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:12.939 [2024-07-15 20:49:16.681659] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:12.939 [2024-07-15 20:49:16.683412] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.939 passed 00:13:12.939 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 20:49:16.777522] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:13.199 [2024-07-15 20:49:16.868515] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:13.199 [2024-07-15 20:49:16.876129] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:13.199 [2024-07-15 20:49:16.884131] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:13.199 [2024-07-15 20:49:16.892129] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:13.199 [2024-07-15 20:49:16.921217] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:13.199 passed 00:13:13.199 Test: admin_create_io_sq_verify_pc ...[2024-07-15 20:49:17.015232] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:13.199 [2024-07-15 20:49:17.034138] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:13.199 [2024-07-15 20:49:17.051402] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:13.199 passed 00:13:13.460 Test: admin_create_io_qp_max_qps ...[2024-07-15 20:49:17.145932] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:14.400 [2024-07-15 20:49:18.264135] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:14.969 [2024-07-15 20:49:18.651582] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:14.969 passed 00:13:14.969 Test: admin_create_io_sq_shared_cq ...[2024-07-15 20:49:18.747401] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:15.229 [2024-07-15 20:49:18.880129] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:15.229 [2024-07-15 20:49:18.917198] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:15.229 passed 00:13:15.229 00:13:15.229 Run Summary: Type Total Ran Passed Failed Inactive 00:13:15.229 suites 1 1 n/a 0 0 00:13:15.229 tests 18 18 18 0 0 00:13:15.229 asserts 360 360 360 0 n/a 00:13:15.229 00:13:15.229 Elapsed time = 1.689 seconds 00:13:15.229 20:49:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1497979 00:13:15.229 20:49:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1497979 ']' 00:13:15.229 20:49:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1497979 00:13:15.229 20:49:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:15.229 20:49:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:15.229 20:49:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1497979 00:13:15.229 20:49:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:15.229 20:49:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:15.229 20:49:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1497979' 00:13:15.229 killing process with pid 1497979 00:13:15.229 20:49:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1497979 00:13:15.229 20:49:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1497979 00:13:15.490 20:49:19 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:15.490 20:49:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:15.490 00:13:15.490 real 0m6.510s 00:13:15.490 user 0m18.645s 00:13:15.490 sys 0m0.464s 00:13:15.490 20:49:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.490 20:49:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:15.490 ************************************ 00:13:15.490 END TEST nvmf_vfio_user_nvme_compliance 00:13:15.490 ************************************ 00:13:15.490 20:49:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:15.490 20:49:19 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:15.490 20:49:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:15.490 20:49:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.490 20:49:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:15.490 ************************************ 00:13:15.490 START TEST nvmf_vfio_user_fuzz 00:13:15.490 ************************************ 00:13:15.490 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:15.490 * Looking for test storage... 00:13:15.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.490 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.490 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:15.490 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.490 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.490 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.490 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.491 20:49:19 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.491 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1499380 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1499380' 00:13:15.751 Process pid: 1499380 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1499380 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1499380 ']' 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
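The pattern above (launch nvmf_tgt in the background, install a cleanup trap, then block in waitforlisten until the RPC socket answers) can be approximated outside the harness as follows. This is a sketch: killprocess and waitforlisten are test helpers not shown in the log, so they are replaced here by a plain kill and by polling spdk_get_version on the default socket.

  # Rough standalone equivalent of the start / trap / waitforlisten sequence.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # shm id 0, tracepoint mask 0xFFFF, core 0 only
  nvmfpid=$!
  trap 'kill -9 "$nvmfpid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT
  # Poll the default RPC socket until the target accepts commands.
  until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done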
00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.751 20:49:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:16.691 20:49:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.691 20:49:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:16.691 20:49:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:17.630 malloc0 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:17.630 20:49:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:49.773 Fuzzing completed. 
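The -F argument to nvme_fuzz above is a standard SPDK transport ID string describing the vfio-user endpoint just created. As a hedged illustration (not taken from this run), the same fuzzer binary can be pointed at a TCP listener simply by swapping the trid; the address and port below are the ones the later host_management test listens on.

  # Sketch only: the same 30-second seeded fuzz run aimed at an NVMe/TCP target.
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -N -a \
      -F 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0'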
Shutting down the fuzz application 00:13:49.773 00:13:49.773 Dumping successful admin opcodes: 00:13:49.773 8, 9, 10, 24, 00:13:49.773 Dumping successful io opcodes: 00:13:49.773 0, 00:13:49.773 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1267644, total successful commands: 4974, random_seed: 3192764736 00:13:49.773 NS: 0x200003a1ef00 admin qp, Total commands completed: 159416, total successful commands: 1286, random_seed: 391296320 00:13:49.773 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:49.773 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.773 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:49.773 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.773 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1499380 00:13:49.774 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1499380 ']' 00:13:49.774 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1499380 00:13:49.774 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:49.774 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.774 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1499380 00:13:49.774 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:49.774 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:49.774 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1499380' 00:13:49.774 killing process with pid 1499380 00:13:49.774 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1499380 00:13:49.774 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1499380 00:13:49.774 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:49.774 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:49.774 00:13:49.774 real 0m33.713s 00:13:49.774 user 0m40.735s 00:13:49.774 sys 0m23.594s 00:13:49.774 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.774 20:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 ************************************ 00:13:49.774 END TEST nvmf_vfio_user_fuzz 00:13:49.774 ************************************ 00:13:49.774 20:49:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:49.774 20:49:53 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:49.774 20:49:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:49.774 20:49:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.774 20:49:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:49.774 ************************************ 
00:13:49.774 START TEST nvmf_host_management 00:13:49.774 ************************************ 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:49.774 * Looking for test storage... 00:13:49.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.774 
20:49:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:49.774 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:49.775 20:49:53 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:49.775 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.775 20:49:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.775 20:49:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.775 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:49.775 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:49.775 20:49:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:49.775 20:49:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:56.363 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:56.363 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.363 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:56.364 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:56.364 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.364 20:49:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.364 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.364 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.364 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:56.364 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.364 20:50:00 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.364 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:56.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.545 ms 00:13:56.625 00:13:56.625 --- 10.0.0.2 ping statistics --- 00:13:56.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.625 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.402 ms 00:13:56.625 00:13:56.625 --- 10.0.0.1 ping statistics --- 00:13:56.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.625 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1509697 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1509697 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1509697 ']' 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:56.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.625 20:50:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:56.625 [2024-07-15 20:50:00.372244] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:13:56.625 [2024-07-15 20:50:00.372309] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.625 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.625 [2024-07-15 20:50:00.462838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:56.885 [2024-07-15 20:50:00.559789] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.885 [2024-07-15 20:50:00.559848] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.885 [2024-07-15 20:50:00.559856] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.885 [2024-07-15 20:50:00.559862] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.885 [2024-07-15 20:50:00.559868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.885 [2024-07-15 20:50:00.560003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.885 [2024-07-15 20:50:00.560178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.885 [2024-07-15 20:50:00.560409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:56.885 [2024-07-15 20:50:00.560410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:57.457 [2024-07-15 20:50:01.200623] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:57.457 20:50:01 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:57.457 Malloc0 00:13:57.457 [2024-07-15 20:50:01.263805] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1509754 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1509754 /var/tmp/bdevperf.sock 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1509754 ']' 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:57.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:57.457 { 00:13:57.457 "params": { 00:13:57.457 "name": "Nvme$subsystem", 00:13:57.457 "trtype": "$TEST_TRANSPORT", 00:13:57.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:57.457 "adrfam": "ipv4", 00:13:57.457 "trsvcid": "$NVMF_PORT", 00:13:57.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:57.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:57.457 "hdgst": ${hdgst:-false}, 00:13:57.457 "ddgst": ${ddgst:-false} 00:13:57.457 }, 00:13:57.457 "method": "bdev_nvme_attach_controller" 00:13:57.457 } 00:13:57.457 EOF 00:13:57.457 )") 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:57.457 20:50:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:57.457 "params": { 00:13:57.457 "name": "Nvme0", 00:13:57.457 "trtype": "tcp", 00:13:57.457 "traddr": "10.0.0.2", 00:13:57.457 "adrfam": "ipv4", 00:13:57.457 "trsvcid": "4420", 00:13:57.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:57.457 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:57.457 "hdgst": false, 00:13:57.457 "ddgst": false 00:13:57.457 }, 00:13:57.457 "method": "bdev_nvme_attach_controller" 00:13:57.457 }' 00:13:57.740 [2024-07-15 20:50:01.371824] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:13:57.740 [2024-07-15 20:50:01.371874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509754 ] 00:13:57.740 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.740 [2024-07-15 20:50:01.431657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.740 [2024-07-15 20:50:01.496660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.999 Running I/O for 10 seconds... 
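Each entry the heredoc above generates corresponds one-to-one with a bdev_nvme_attach_controller call; the printf output shows the fully substituted parameters for subsystem 0 (Nvme0 over TCP to 10.0.0.2:4420). As a rough runtime equivalent (a sketch, not part of this run), the same controller could be attached over the bdevperf RPC socket instead of via --json:

  # Sketch: issue the same attach as an RPC rather than through the generated JSON.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0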
00:13:58.259 20:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:58.259 20:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:58.259 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:58.259 20:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.259 20:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.521 20:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:58.521 [2024-07-15 20:50:02.230927] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229de40 is same with the state(5) to be set 00:13:58.521 [2024-07-15 20:50:02.230999] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229de40 is same with the state(5) to be set 00:13:58.521 [2024-07-15 20:50:02.231007] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229de40 is same with the state(5) to be 
set 00:13:58.521 [2024-07-15 20:50:02.231019] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229de40 is same with the state(5) to be set 00:13:58.521 [2024-07-15 20:50:02.231026] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229de40 is same with the state(5) to be set 00:13:58.521 [2024-07-15 20:50:02.231032] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229de40 is same with the state(5) to be set 00:13:58.521 [2024-07-15 20:50:02.231038] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229de40 is same with the state(5) to be set 00:13:58.521 [2024-07-15 20:50:02.231045] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229de40 is same with the state(5) to be set 00:13:58.521 [2024-07-15 20:50:02.231051] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229de40 is same with the state(5) to be set 00:13:58.521 [2024-07-15 20:50:02.231057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229de40 is same with the state(5) to be set 00:13:58.521 [2024-07-15 20:50:02.231063] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229de40 is same with the state(5) to be set 00:13:58.521 [2024-07-15 20:50:02.234239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.521 [2024-07-15 20:50:02.234276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.234287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.521 [2024-07-15 20:50:02.234294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.234303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.521 [2024-07-15 20:50:02.234310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.234318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.521 [2024-07-15 20:50:02.234325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.234332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b63b0 is same with the state(5) to be set 00:13:58.521 [2024-07-15 20:50:02.235532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:13:58.521 [2024-07-15 20:50:02.235754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.521 [2024-07-15 20:50:02.235909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:13:58.521 [2024-07-15 20:50:02.235926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.521 [2024-07-15 20:50:02.235934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.235943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 20:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.522 [2024-07-15 20:50:02.235951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.235967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.235975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.235984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.235992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:58.522 [2024-07-15 20:50:02.236350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 20:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.522 [2024-07-15 20:50:02.236644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.522 [2024-07-15 20:50:02.236652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.522 [2024-07-15 20:50:02.236702] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ac74f0 was disconnected and freed. reset controller. 00:13:58.523 20:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:58.523 [2024-07-15 20:50:02.237875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:58.523 task offset: 73728 on job bdev=Nvme0n1 fails 00:13:58.523 00:13:58.523 Latency(us) 00:13:58.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.523 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:58.523 Job: Nvme0n1 ended in about 0.56 seconds with error 00:13:58.523 Verification LBA range: start 0x0 length 0x400 00:13:58.523 Nvme0n1 : 0.56 1035.25 64.70 115.03 0.00 54386.90 1556.48 46530.56 00:13:58.523 =================================================================================================================== 00:13:58.523 Total : 1035.25 64.70 115.03 0.00 54386.90 1556.48 46530.56 00:13:58.523 [2024-07-15 20:50:02.239949] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:58.523 [2024-07-15 20:50:02.239971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b63b0 (9): Bad file descriptor 00:13:58.523 20:50:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.523 20:50:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:58.523 [2024-07-15 20:50:02.249303] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
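The read-I/O gate that ran just before this failure path (the waitforio loop traced earlier in this test) is easier to follow as a standalone loop. The sketch below keeps the traced socket path, bdev name and 100-read threshold; the rpc.py path and the sleep interval are assumptions, so this is an outline of the traced logic rather than the literal test code.

# ---- illustrative sketch, not part of the captured log ----
# waitforio, roughly as traced: poll bdevperf's RPC socket until Nvme0n1 has
# completed at least 100 reads, giving up after 10 attempts.
RPC=./scripts/rpc.py              # assumed path to SPDK's rpc.py
SOCK=/var/tmp/bdevperf.sock

waitforio() {
    local i ops
    for ((i = 10; i != 0; i--)); do
        # bdev_get_iostat reports per-bdev counters as JSON
        ops=$("$RPC" -s "$SOCK" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        if [ "$ops" -ge 100 ]; then
            return 0              # enough I/O observed, the test can proceed
        fi
        sleep 0.25                # assumed back-off between polls
    done
    return 1                      # bdevperf never generated I/O
}
# -----------------------------------------------------------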
00:13:59.465 20:50:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1509754 00:13:59.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1509754) - No such process 00:13:59.465 20:50:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:59.465 20:50:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:59.465 20:50:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:59.465 20:50:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:59.465 20:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:59.465 20:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:59.465 20:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:59.465 20:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:59.465 { 00:13:59.465 "params": { 00:13:59.465 "name": "Nvme$subsystem", 00:13:59.465 "trtype": "$TEST_TRANSPORT", 00:13:59.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:59.465 "adrfam": "ipv4", 00:13:59.465 "trsvcid": "$NVMF_PORT", 00:13:59.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:59.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:59.465 "hdgst": ${hdgst:-false}, 00:13:59.465 "ddgst": ${ddgst:-false} 00:13:59.465 }, 00:13:59.465 "method": "bdev_nvme_attach_controller" 00:13:59.465 } 00:13:59.465 EOF 00:13:59.465 )") 00:13:59.465 20:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:59.465 20:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:59.465 20:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:59.465 20:50:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:59.465 "params": { 00:13:59.465 "name": "Nvme0", 00:13:59.465 "trtype": "tcp", 00:13:59.465 "traddr": "10.0.0.2", 00:13:59.465 "adrfam": "ipv4", 00:13:59.465 "trsvcid": "4420", 00:13:59.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:59.465 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:59.465 "hdgst": false, 00:13:59.465 "ddgst": false 00:13:59.465 }, 00:13:59.465 "method": "bdev_nvme_attach_controller" 00:13:59.465 }' 00:13:59.465 [2024-07-15 20:50:03.306541] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:13:59.465 [2024-07-15 20:50:03.306596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510187 ] 00:13:59.465 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.725 [2024-07-15 20:50:03.364938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.725 [2024-07-15 20:50:03.428680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.725 Running I/O for 1 seconds... 
00:14:01.107 00:14:01.107 Latency(us) 00:14:01.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.107 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:01.107 Verification LBA range: start 0x0 length 0x400 00:14:01.107 Nvme0n1 : 1.05 1095.39 68.46 0.00 0.00 57568.73 12888.75 48278.19 00:14:01.107 =================================================================================================================== 00:14:01.107 Total : 1095.39 68.46 0.00 0.00 57568.73 12888.75 48278.19 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:01.107 rmmod nvme_tcp 00:14:01.107 rmmod nvme_fabrics 00:14:01.107 rmmod nvme_keyring 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1509697 ']' 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1509697 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1509697 ']' 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1509697 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1509697 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1509697' 00:14:01.107 killing process with pid 1509697 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1509697 00:14:01.107 20:50:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1509697 00:14:01.368 [2024-07-15 20:50:05.008742] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:01.368 20:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:01.368 20:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:01.368 20:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:01.368 20:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:01.368 20:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:01.368 20:50:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.368 20:50:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.368 20:50:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.283 20:50:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:03.283 20:50:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:03.283 00:14:03.283 real 0m14.066s 00:14:03.283 user 0m22.462s 00:14:03.283 sys 0m6.182s 00:14:03.283 20:50:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:03.283 20:50:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.283 ************************************ 00:14:03.283 END TEST nvmf_host_management 00:14:03.283 ************************************ 00:14:03.283 20:50:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:03.283 20:50:07 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:03.283 20:50:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:03.283 20:50:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.283 20:50:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:03.543 ************************************ 00:14:03.543 START TEST nvmf_lvol 00:14:03.543 ************************************ 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:03.544 * Looking for test storage... 
00:14:03.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.544 20:50:07 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:03.544 20:50:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:10.137 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:10.137 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:10.137 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:10.137 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:10.137 
20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:10.137 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:10.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:14:10.138 00:14:10.138 --- 10.0.0.2 ping statistics --- 00:14:10.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.138 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:10.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.391 ms 00:14:10.138 00:14:10.138 --- 10.0.0.1 ping statistics --- 00:14:10.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.138 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1514631 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1514631 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1514631 ']' 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:10.138 20:50:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:10.435 [2024-07-15 20:50:14.034256] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:14:10.435 [2024-07-15 20:50:14.034321] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.435 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.435 [2024-07-15 20:50:14.104888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:10.435 [2024-07-15 20:50:14.180032] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.435 [2024-07-15 20:50:14.180071] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:10.435 [2024-07-15 20:50:14.180079] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.435 [2024-07-15 20:50:14.180085] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.435 [2024-07-15 20:50:14.180090] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.435 [2024-07-15 20:50:14.180177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.435 [2024-07-15 20:50:14.180320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.435 [2024-07-15 20:50:14.180323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.006 20:50:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.006 20:50:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:11.006 20:50:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.006 20:50:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:11.006 20:50:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:11.006 20:50:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.006 20:50:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:11.266 [2024-07-15 20:50:14.992433] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.266 20:50:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:11.526 20:50:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:11.526 20:50:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:11.526 20:50:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:11.526 20:50:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:11.787 20:50:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:12.048 20:50:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=84b70411-3663-4c93-a359-90f65814a61f 00:14:12.048 20:50:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 84b70411-3663-4c93-a359-90f65814a61f lvol 20 00:14:12.048 20:50:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e2764a0c-60a7-4a6c-b2a6-eb24b9fa29ed 00:14:12.048 20:50:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:12.309 20:50:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e2764a0c-60a7-4a6c-b2a6-eb24b9fa29ed 00:14:12.309 20:50:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:12.570 [2024-07-15 20:50:16.339972] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.570 20:50:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:12.829 20:50:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1515135 00:14:12.830 20:50:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:12.830 20:50:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:12.830 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.771 20:50:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e2764a0c-60a7-4a6c-b2a6-eb24b9fa29ed MY_SNAPSHOT 00:14:14.031 20:50:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1092331a-c993-4fb1-97f6-65a1e62edca7 00:14:14.031 20:50:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e2764a0c-60a7-4a6c-b2a6-eb24b9fa29ed 30 00:14:14.031 20:50:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1092331a-c993-4fb1-97f6-65a1e62edca7 MY_CLONE 00:14:14.292 20:50:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bbc8ab4c-3779-4fde-8eb0-f79918c76b5d 00:14:14.292 20:50:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bbc8ab4c-3779-4fde-8eb0-f79918c76b5d 00:14:14.552 20:50:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1515135 00:14:24.549 Initializing NVMe Controllers 00:14:24.549 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:24.549 Controller IO queue size 128, less than required. 00:14:24.549 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:24.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:24.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:24.549 Initialization complete. Launching workers. 
00:14:24.549 ======================================================== 00:14:24.549 Latency(us) 00:14:24.549 Device Information : IOPS MiB/s Average min max 00:14:24.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12407.46 48.47 10322.25 1642.00 59133.83 00:14:24.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18112.15 70.75 7067.23 617.54 44858.09 00:14:24.549 ======================================================== 00:14:24.549 Total : 30519.61 119.22 8390.53 617.54 59133.83 00:14:24.549 00:14:24.549 20:50:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e2764a0c-60a7-4a6c-b2a6-eb24b9fa29ed 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 84b70411-3663-4c93-a359-90f65814a61f 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:24.549 rmmod nvme_tcp 00:14:24.549 rmmod nvme_fabrics 00:14:24.549 rmmod nvme_keyring 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1514631 ']' 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1514631 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1514631 ']' 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1514631 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1514631 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1514631' 00:14:24.549 killing process with pid 1514631 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1514631 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1514631 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:24.549 
20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.549 20:50:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.931 20:50:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:25.931 00:14:25.931 real 0m22.539s 00:14:25.931 user 1m3.355s 00:14:25.931 sys 0m7.306s 00:14:25.931 20:50:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:25.931 20:50:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:25.931 ************************************ 00:14:25.931 END TEST nvmf_lvol 00:14:25.931 ************************************ 00:14:25.931 20:50:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:25.931 20:50:29 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:25.931 20:50:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:25.931 20:50:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.931 20:50:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:25.931 ************************************ 00:14:25.931 START TEST nvmf_lvs_grow 00:14:25.931 ************************************ 00:14:25.931 20:50:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:26.191 * Looking for test storage... 
00:14:26.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:26.191 20:50:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:34.328 20:50:36 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:34.329 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:34.329 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:34.329 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:34.329 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:34.329 20:50:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:34.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:34.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:14:34.329 00:14:34.329 --- 10.0.0.2 ping statistics --- 00:14:34.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.329 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:34.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:34.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:14:34.329 00:14:34.329 --- 10.0.0.1 ping statistics --- 00:14:34.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.329 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1521470 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1521470 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1521470 ']' 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:34.329 [2024-07-15 20:50:37.175134] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:14:34.329 [2024-07-15 20:50:37.175198] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.329 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.329 [2024-07-15 20:50:37.244909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.329 [2024-07-15 20:50:37.317465] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.329 [2024-07-15 20:50:37.317500] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:34.329 [2024-07-15 20:50:37.317507] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.329 [2024-07-15 20:50:37.317513] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.329 [2024-07-15 20:50:37.317519] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.329 [2024-07-15 20:50:37.317539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.329 20:50:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:34.329 [2024-07-15 20:50:38.120469] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.329 20:50:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:34.329 20:50:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:34.329 20:50:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.329 20:50:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:34.329 ************************************ 00:14:34.329 START TEST lvs_grow_clean 00:14:34.329 ************************************ 00:14:34.329 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:34.329 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:34.329 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:34.329 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:34.329 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:34.329 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:34.329 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:34.330 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:34.330 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:34.330 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:34.589 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:34.589 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:34.849 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=930e924e-1ce0-422f-abf7-1d85d2d2cae3 00:14:34.849 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 930e924e-1ce0-422f-abf7-1d85d2d2cae3 00:14:34.849 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:34.849 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:34.849 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:34.849 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 930e924e-1ce0-422f-abf7-1d85d2d2cae3 lvol 150 00:14:35.110 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5e8c94b6-fe8b-40bf-acc3-6d285c67f590 00:14:35.110 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:35.110 20:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:35.110 [2024-07-15 20:50:38.990188] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:35.110 [2024-07-15 20:50:38.990241] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:35.110 true 00:14:35.370 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 930e924e-1ce0-422f-abf7-1d85d2d2cae3 00:14:35.370 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:35.370 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:35.370 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:35.631 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5e8c94b6-fe8b-40bf-acc3-6d285c67f590 00:14:35.631 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:35.892 [2024-07-15 20:50:39.608102] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.892 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:35.892 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1521945 00:14:35.892 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:35.892 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:35.892 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1521945 /var/tmp/bdevperf.sock 00:14:35.892 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1521945 ']' 00:14:35.892 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:35.892 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.892 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:35.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:35.892 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.892 20:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:36.156 [2024-07-15 20:50:39.832608] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:14:36.156 [2024-07-15 20:50:39.832671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521945 ] 00:14:36.156 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.156 [2024-07-15 20:50:39.909513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.156 [2024-07-15 20:50:39.973648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.757 20:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:36.757 20:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:36.757 20:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:37.018 Nvme0n1 00:14:37.018 20:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:37.278 [ 00:14:37.279 { 00:14:37.279 "name": "Nvme0n1", 00:14:37.279 "aliases": [ 00:14:37.279 "5e8c94b6-fe8b-40bf-acc3-6d285c67f590" 00:14:37.279 ], 00:14:37.279 "product_name": "NVMe disk", 00:14:37.279 "block_size": 4096, 00:14:37.279 "num_blocks": 38912, 00:14:37.279 "uuid": "5e8c94b6-fe8b-40bf-acc3-6d285c67f590", 00:14:37.279 "assigned_rate_limits": { 00:14:37.279 "rw_ios_per_sec": 0, 00:14:37.279 "rw_mbytes_per_sec": 0, 00:14:37.279 "r_mbytes_per_sec": 0, 00:14:37.279 "w_mbytes_per_sec": 0 00:14:37.279 }, 00:14:37.279 "claimed": false, 00:14:37.279 "zoned": false, 00:14:37.279 "supported_io_types": { 00:14:37.279 "read": true, 00:14:37.279 "write": true, 00:14:37.279 "unmap": true, 00:14:37.279 "flush": true, 00:14:37.279 "reset": true, 00:14:37.279 "nvme_admin": true, 00:14:37.279 "nvme_io": true, 00:14:37.279 "nvme_io_md": false, 00:14:37.279 "write_zeroes": true, 00:14:37.279 "zcopy": false, 00:14:37.279 "get_zone_info": false, 00:14:37.279 "zone_management": false, 00:14:37.279 "zone_append": false, 00:14:37.279 "compare": true, 00:14:37.279 "compare_and_write": true, 00:14:37.279 "abort": true, 00:14:37.279 "seek_hole": false, 00:14:37.279 "seek_data": false, 00:14:37.279 "copy": true, 00:14:37.279 "nvme_iov_md": false 00:14:37.279 }, 00:14:37.279 "memory_domains": [ 00:14:37.279 { 00:14:37.279 "dma_device_id": "system", 00:14:37.279 "dma_device_type": 1 00:14:37.279 } 00:14:37.279 ], 00:14:37.279 "driver_specific": { 00:14:37.279 "nvme": [ 00:14:37.279 { 00:14:37.279 "trid": { 00:14:37.279 "trtype": "TCP", 00:14:37.279 "adrfam": "IPv4", 00:14:37.279 "traddr": "10.0.0.2", 00:14:37.279 "trsvcid": "4420", 00:14:37.279 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:37.279 }, 00:14:37.279 "ctrlr_data": { 00:14:37.279 "cntlid": 1, 00:14:37.279 "vendor_id": "0x8086", 00:14:37.279 "model_number": "SPDK bdev Controller", 00:14:37.279 "serial_number": "SPDK0", 00:14:37.279 "firmware_revision": "24.09", 00:14:37.279 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:37.279 "oacs": { 00:14:37.279 "security": 0, 00:14:37.279 "format": 0, 00:14:37.279 "firmware": 0, 00:14:37.279 "ns_manage": 0 00:14:37.279 }, 00:14:37.279 "multi_ctrlr": true, 00:14:37.279 "ana_reporting": false 00:14:37.279 }, 
00:14:37.279 "vs": { 00:14:37.279 "nvme_version": "1.3" 00:14:37.279 }, 00:14:37.279 "ns_data": { 00:14:37.279 "id": 1, 00:14:37.279 "can_share": true 00:14:37.279 } 00:14:37.279 } 00:14:37.279 ], 00:14:37.279 "mp_policy": "active_passive" 00:14:37.279 } 00:14:37.279 } 00:14:37.279 ] 00:14:37.279 20:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1522198 00:14:37.279 20:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:37.279 20:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:37.279 Running I/O for 10 seconds... 00:14:38.220 Latency(us) 00:14:38.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.220 Nvme0n1 : 1.00 18127.00 70.81 0.00 0.00 0.00 0.00 0.00 00:14:38.220 =================================================================================================================== 00:14:38.220 Total : 18127.00 70.81 0.00 0.00 0.00 0.00 0.00 00:14:38.220 00:14:39.161 20:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 930e924e-1ce0-422f-abf7-1d85d2d2cae3 00:14:39.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.421 Nvme0n1 : 2.00 18247.00 71.28 0.00 0.00 0.00 0.00 0.00 00:14:39.421 =================================================================================================================== 00:14:39.421 Total : 18247.00 71.28 0.00 0.00 0.00 0.00 0.00 00:14:39.422 00:14:39.422 true 00:14:39.422 20:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 930e924e-1ce0-422f-abf7-1d85d2d2cae3 00:14:39.422 20:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:39.422 20:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:39.422 20:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:39.422 20:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1522198 00:14:40.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.361 Nvme0n1 : 3.00 18277.00 71.39 0.00 0.00 0.00 0.00 0.00 00:14:40.362 =================================================================================================================== 00:14:40.362 Total : 18277.00 71.39 0.00 0.00 0.00 0.00 0.00 00:14:40.362 00:14:41.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.303 Nvme0n1 : 4.00 18311.50 71.53 0.00 0.00 0.00 0.00 0.00 00:14:41.303 =================================================================================================================== 00:14:41.303 Total : 18311.50 71.53 0.00 0.00 0.00 0.00 0.00 00:14:41.303 00:14:42.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.245 Nvme0n1 : 5.00 18335.60 71.62 0.00 0.00 0.00 0.00 0.00 00:14:42.245 =================================================================================================================== 00:14:42.245 
Total : 18335.60 71.62 0.00 0.00 0.00 0.00 0.00 00:14:42.245 00:14:43.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.187 Nvme0n1 : 6.00 18351.67 71.69 0.00 0.00 0.00 0.00 0.00 00:14:43.187 =================================================================================================================== 00:14:43.187 Total : 18351.67 71.69 0.00 0.00 0.00 0.00 0.00 00:14:43.187 00:14:44.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.569 Nvme0n1 : 7.00 18370.00 71.76 0.00 0.00 0.00 0.00 0.00 00:14:44.569 =================================================================================================================== 00:14:44.569 Total : 18370.00 71.76 0.00 0.00 0.00 0.00 0.00 00:14:44.569 00:14:45.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.512 Nvme0n1 : 8.00 18377.75 71.79 0.00 0.00 0.00 0.00 0.00 00:14:45.512 =================================================================================================================== 00:14:45.512 Total : 18377.75 71.79 0.00 0.00 0.00 0.00 0.00 00:14:45.512 00:14:46.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.453 Nvme0n1 : 9.00 18387.56 71.83 0.00 0.00 0.00 0.00 0.00 00:14:46.453 =================================================================================================================== 00:14:46.453 Total : 18387.56 71.83 0.00 0.00 0.00 0.00 0.00 00:14:46.453 00:14:47.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.393 Nvme0n1 : 10.00 18396.60 71.86 0.00 0.00 0.00 0.00 0.00 00:14:47.393 =================================================================================================================== 00:14:47.393 Total : 18396.60 71.86 0.00 0.00 0.00 0.00 0.00 00:14:47.393 00:14:47.393 00:14:47.393 Latency(us) 00:14:47.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.393 Nvme0n1 : 10.01 18398.31 71.87 0.00 0.00 6953.87 2198.19 13653.33 00:14:47.393 =================================================================================================================== 00:14:47.393 Total : 18398.31 71.87 0.00 0.00 6953.87 2198.19 13653.33 00:14:47.393 0 00:14:47.393 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1521945 00:14:47.393 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1521945 ']' 00:14:47.393 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1521945 00:14:47.393 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:47.393 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:47.393 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1521945 00:14:47.393 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:47.393 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:47.393 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1521945' 00:14:47.393 killing process with pid 1521945 00:14:47.393 20:50:51 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1521945 00:14:47.393 Received shutdown signal, test time was about 10.000000 seconds 00:14:47.393 00:14:47.393 Latency(us) 00:14:47.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.393 =================================================================================================================== 00:14:47.393 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:47.393 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1521945 00:14:47.393 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:47.654 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:47.915 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 930e924e-1ce0-422f-abf7-1d85d2d2cae3 00:14:47.915 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:48.175 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:48.175 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:48.175 20:50:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:48.175 [2024-07-15 20:50:51.950658] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:48.175 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 930e924e-1ce0-422f-abf7-1d85d2d2cae3 00:14:48.175 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:48.175 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 930e924e-1ce0-422f-abf7-1d85d2d2cae3 00:14:48.175 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.175 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.175 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.175 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.175 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.175 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.175 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.175 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:48.175 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 930e924e-1ce0-422f-abf7-1d85d2d2cae3 00:14:48.449 request: 00:14:48.449 { 00:14:48.449 "uuid": "930e924e-1ce0-422f-abf7-1d85d2d2cae3", 00:14:48.449 "method": "bdev_lvol_get_lvstores", 00:14:48.449 "req_id": 1 00:14:48.449 } 00:14:48.449 Got JSON-RPC error response 00:14:48.449 response: 00:14:48.449 { 00:14:48.449 "code": -19, 00:14:48.449 "message": "No such device" 00:14:48.449 } 00:14:48.449 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:48.449 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:48.449 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:48.449 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:48.449 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:48.449 aio_bdev 00:14:48.449 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5e8c94b6-fe8b-40bf-acc3-6d285c67f590 00:14:48.449 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=5e8c94b6-fe8b-40bf-acc3-6d285c67f590 00:14:48.449 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:48.449 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:48.449 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:48.449 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:48.449 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:48.709 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5e8c94b6-fe8b-40bf-acc3-6d285c67f590 -t 2000 00:14:48.709 [ 00:14:48.709 { 00:14:48.709 "name": "5e8c94b6-fe8b-40bf-acc3-6d285c67f590", 00:14:48.709 "aliases": [ 00:14:48.709 "lvs/lvol" 00:14:48.709 ], 00:14:48.709 "product_name": "Logical Volume", 00:14:48.709 "block_size": 4096, 00:14:48.709 "num_blocks": 38912, 00:14:48.709 "uuid": "5e8c94b6-fe8b-40bf-acc3-6d285c67f590", 00:14:48.709 "assigned_rate_limits": { 00:14:48.709 "rw_ios_per_sec": 0, 00:14:48.709 "rw_mbytes_per_sec": 0, 00:14:48.709 "r_mbytes_per_sec": 0, 00:14:48.709 "w_mbytes_per_sec": 0 00:14:48.709 }, 00:14:48.709 "claimed": false, 00:14:48.709 "zoned": false, 00:14:48.709 "supported_io_types": { 00:14:48.709 "read": true, 00:14:48.709 "write": true, 00:14:48.709 "unmap": true, 00:14:48.709 "flush": false, 00:14:48.709 "reset": true, 00:14:48.709 "nvme_admin": false, 00:14:48.709 "nvme_io": false, 00:14:48.709 
"nvme_io_md": false, 00:14:48.709 "write_zeroes": true, 00:14:48.709 "zcopy": false, 00:14:48.709 "get_zone_info": false, 00:14:48.709 "zone_management": false, 00:14:48.709 "zone_append": false, 00:14:48.709 "compare": false, 00:14:48.709 "compare_and_write": false, 00:14:48.709 "abort": false, 00:14:48.709 "seek_hole": true, 00:14:48.709 "seek_data": true, 00:14:48.709 "copy": false, 00:14:48.709 "nvme_iov_md": false 00:14:48.709 }, 00:14:48.709 "driver_specific": { 00:14:48.709 "lvol": { 00:14:48.709 "lvol_store_uuid": "930e924e-1ce0-422f-abf7-1d85d2d2cae3", 00:14:48.709 "base_bdev": "aio_bdev", 00:14:48.709 "thin_provision": false, 00:14:48.709 "num_allocated_clusters": 38, 00:14:48.709 "snapshot": false, 00:14:48.709 "clone": false, 00:14:48.709 "esnap_clone": false 00:14:48.709 } 00:14:48.709 } 00:14:48.709 } 00:14:48.709 ] 00:14:48.709 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:48.709 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 930e924e-1ce0-422f-abf7-1d85d2d2cae3 00:14:48.709 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:48.969 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:48.969 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 930e924e-1ce0-422f-abf7-1d85d2d2cae3 00:14:48.969 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:49.229 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:49.229 20:50:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5e8c94b6-fe8b-40bf-acc3-6d285c67f590 00:14:49.229 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 930e924e-1ce0-422f-abf7-1d85d2d2cae3 00:14:49.489 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:49.489 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:49.749 00:14:49.749 real 0m15.225s 00:14:49.749 user 0m14.938s 00:14:49.749 sys 0m1.251s 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:49.749 ************************************ 00:14:49.749 END TEST lvs_grow_clean 00:14:49.749 ************************************ 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:49.749 ************************************ 00:14:49.749 START TEST lvs_grow_dirty 00:14:49.749 ************************************ 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:49.749 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:50.010 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:50.010 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:50.010 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=12b7c65f-15f9-4cc9-bc21-6b70fbf226f9 00:14:50.010 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12b7c65f-15f9-4cc9-bc21-6b70fbf226f9 00:14:50.010 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:50.270 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:50.270 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:50.270 20:50:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 12b7c65f-15f9-4cc9-bc21-6b70fbf226f9 lvol 150 00:14:50.270 20:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c800d118-f483-445c-9213-353cc47f61ed 00:14:50.270 20:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:50.270 20:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:50.530 
[2024-07-15 20:50:54.295180] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:50.530 [2024-07-15 20:50:54.295231] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:50.530 true 00:14:50.530 20:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12b7c65f-15f9-4cc9-bc21-6b70fbf226f9 00:14:50.530 20:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:50.790 20:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:50.790 20:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:50.790 20:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c800d118-f483-445c-9213-353cc47f61ed 00:14:51.050 20:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:51.050 [2024-07-15 20:50:54.937100] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.310 20:50:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:51.310 20:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1524955 00:14:51.310 20:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:51.310 20:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:51.310 20:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1524955 /var/tmp/bdevperf.sock 00:14:51.310 20:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1524955 ']' 00:14:51.310 20:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:51.310 20:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.310 20:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:51.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:51.310 20:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.310 20:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:51.310 [2024-07-15 20:50:55.151374] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:14:51.310 [2024-07-15 20:50:55.151424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524955 ] 00:14:51.310 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.570 [2024-07-15 20:50:55.225258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.570 [2024-07-15 20:50:55.278988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.207 20:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:52.207 20:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:52.207 20:50:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:52.484 Nvme0n1 00:14:52.484 20:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:52.744 [ 00:14:52.744 { 00:14:52.744 "name": "Nvme0n1", 00:14:52.744 "aliases": [ 00:14:52.744 "c800d118-f483-445c-9213-353cc47f61ed" 00:14:52.744 ], 00:14:52.744 "product_name": "NVMe disk", 00:14:52.744 "block_size": 4096, 00:14:52.744 "num_blocks": 38912, 00:14:52.744 "uuid": "c800d118-f483-445c-9213-353cc47f61ed", 00:14:52.744 "assigned_rate_limits": { 00:14:52.744 "rw_ios_per_sec": 0, 00:14:52.744 "rw_mbytes_per_sec": 0, 00:14:52.744 "r_mbytes_per_sec": 0, 00:14:52.744 "w_mbytes_per_sec": 0 00:14:52.744 }, 00:14:52.744 "claimed": false, 00:14:52.744 "zoned": false, 00:14:52.744 "supported_io_types": { 00:14:52.744 "read": true, 00:14:52.744 "write": true, 00:14:52.744 "unmap": true, 00:14:52.744 "flush": true, 00:14:52.744 "reset": true, 00:14:52.744 "nvme_admin": true, 00:14:52.744 "nvme_io": true, 00:14:52.744 "nvme_io_md": false, 00:14:52.744 "write_zeroes": true, 00:14:52.744 "zcopy": false, 00:14:52.744 "get_zone_info": false, 00:14:52.744 "zone_management": false, 00:14:52.744 "zone_append": false, 00:14:52.744 "compare": true, 00:14:52.744 "compare_and_write": true, 00:14:52.744 "abort": true, 00:14:52.744 "seek_hole": false, 00:14:52.744 "seek_data": false, 00:14:52.744 "copy": true, 00:14:52.744 "nvme_iov_md": false 00:14:52.744 }, 00:14:52.744 "memory_domains": [ 00:14:52.744 { 00:14:52.744 "dma_device_id": "system", 00:14:52.744 "dma_device_type": 1 00:14:52.744 } 00:14:52.744 ], 00:14:52.744 "driver_specific": { 00:14:52.744 "nvme": [ 00:14:52.744 { 00:14:52.744 "trid": { 00:14:52.744 "trtype": "TCP", 00:14:52.744 "adrfam": "IPv4", 00:14:52.744 "traddr": "10.0.0.2", 00:14:52.744 "trsvcid": "4420", 00:14:52.744 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:52.744 }, 00:14:52.744 "ctrlr_data": { 00:14:52.744 "cntlid": 1, 00:14:52.744 "vendor_id": "0x8086", 00:14:52.744 "model_number": "SPDK bdev Controller", 00:14:52.744 "serial_number": "SPDK0", 
00:14:52.744 "firmware_revision": "24.09", 00:14:52.744 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:52.744 "oacs": { 00:14:52.744 "security": 0, 00:14:52.744 "format": 0, 00:14:52.744 "firmware": 0, 00:14:52.744 "ns_manage": 0 00:14:52.744 }, 00:14:52.744 "multi_ctrlr": true, 00:14:52.744 "ana_reporting": false 00:14:52.744 }, 00:14:52.744 "vs": { 00:14:52.744 "nvme_version": "1.3" 00:14:52.744 }, 00:14:52.744 "ns_data": { 00:14:52.744 "id": 1, 00:14:52.744 "can_share": true 00:14:52.744 } 00:14:52.744 } 00:14:52.744 ], 00:14:52.744 "mp_policy": "active_passive" 00:14:52.744 } 00:14:52.744 } 00:14:52.744 ] 00:14:52.744 20:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:52.744 20:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1525278 00:14:52.744 20:50:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:52.744 Running I/O for 10 seconds... 00:14:53.685 Latency(us) 00:14:53.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.686 Nvme0n1 : 1.00 17636.00 68.89 0.00 0.00 0.00 0.00 0.00 00:14:53.686 =================================================================================================================== 00:14:53.686 Total : 17636.00 68.89 0.00 0.00 0.00 0.00 0.00 00:14:53.686 00:14:54.627 20:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 12b7c65f-15f9-4cc9-bc21-6b70fbf226f9 00:14:54.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.888 Nvme0n1 : 2.00 17710.00 69.18 0.00 0.00 0.00 0.00 0.00 00:14:54.889 =================================================================================================================== 00:14:54.889 Total : 17710.00 69.18 0.00 0.00 0.00 0.00 0.00 00:14:54.889 00:14:54.889 true 00:14:54.889 20:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12b7c65f-15f9-4cc9-bc21-6b70fbf226f9 00:14:54.889 20:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:54.889 20:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:54.889 20:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:54.889 20:50:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1525278 00:14:55.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.829 Nvme0n1 : 3.00 17740.00 69.30 0.00 0.00 0.00 0.00 0.00 00:14:55.829 =================================================================================================================== 00:14:55.829 Total : 17740.00 69.30 0.00 0.00 0.00 0.00 0.00 00:14:55.829 00:14:56.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.770 Nvme0n1 : 4.00 17761.00 69.38 0.00 0.00 0.00 0.00 0.00 00:14:56.770 =================================================================================================================== 00:14:56.770 Total : 17761.00 69.38 0.00 
0.00 0.00 0.00 0.00 00:14:56.770 00:14:57.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.711 Nvme0n1 : 5.00 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:14:57.712 =================================================================================================================== 00:14:57.712 Total : 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:14:57.712 00:14:58.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.654 Nvme0n1 : 6.00 17796.67 69.52 0.00 0.00 0.00 0.00 0.00 00:14:58.654 =================================================================================================================== 00:14:58.654 Total : 17796.67 69.52 0.00 0.00 0.00 0.00 0.00 00:14:58.654 00:15:00.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.037 Nvme0n1 : 7.00 17816.57 69.60 0.00 0.00 0.00 0.00 0.00 00:15:00.037 =================================================================================================================== 00:15:00.037 Total : 17816.57 69.60 0.00 0.00 0.00 0.00 0.00 00:15:00.037 00:15:00.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.979 Nvme0n1 : 8.00 17830.50 69.65 0.00 0.00 0.00 0.00 0.00 00:15:00.979 =================================================================================================================== 00:15:00.979 Total : 17830.50 69.65 0.00 0.00 0.00 0.00 0.00 00:15:00.979 00:15:01.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.920 Nvme0n1 : 9.00 17844.00 69.70 0.00 0.00 0.00 0.00 0.00 00:15:01.920 =================================================================================================================== 00:15:01.920 Total : 17844.00 69.70 0.00 0.00 0.00 0.00 0.00 00:15:01.920 00:15:02.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.863 Nvme0n1 : 10.00 17854.00 69.74 0.00 0.00 0.00 0.00 0.00 00:15:02.863 =================================================================================================================== 00:15:02.863 Total : 17854.00 69.74 0.00 0.00 0.00 0.00 0.00 00:15:02.863 00:15:02.863 00:15:02.863 Latency(us) 00:15:02.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.863 Nvme0n1 : 10.01 17854.34 69.74 0.00 0.00 7164.36 1802.24 9448.11 00:15:02.863 =================================================================================================================== 00:15:02.863 Total : 17854.34 69.74 0.00 0.00 7164.36 1802.24 9448.11 00:15:02.863 0 00:15:02.863 20:51:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1524955 00:15:02.863 20:51:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1524955 ']' 00:15:02.863 20:51:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1524955 00:15:02.863 20:51:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:02.863 20:51:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:02.863 20:51:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1524955 00:15:02.863 20:51:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:02.863 20:51:06 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:02.863 20:51:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1524955' 00:15:02.863 killing process with pid 1524955 00:15:02.863 20:51:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1524955 00:15:02.863 Received shutdown signal, test time was about 10.000000 seconds 00:15:02.863 00:15:02.863 Latency(us) 00:15:02.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.863 =================================================================================================================== 00:15:02.863 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:02.863 20:51:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1524955 00:15:02.863 20:51:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:03.123 20:51:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:03.384 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12b7c65f-15f9-4cc9-bc21-6b70fbf226f9 00:15:03.384 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:03.384 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:03.384 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:03.384 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1521470 00:15:03.384 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1521470 00:15:03.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1521470 Killed "${NVMF_APP[@]}" "$@" 00:15:03.644 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:03.644 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:03.644 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:03.644 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:03.644 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:03.644 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1527493 00:15:03.644 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1527493 00:15:03.644 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1527493 ']' 00:15:03.644 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:03.644 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.644 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:15:03.644 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.644 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:03.644 20:51:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:03.644 [2024-07-15 20:51:07.365227] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:15:03.644 [2024-07-15 20:51:07.365289] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.644 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.644 [2024-07-15 20:51:07.437518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.644 [2024-07-15 20:51:07.504282] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.644 [2024-07-15 20:51:07.504317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.644 [2024-07-15 20:51:07.504329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.644 [2024-07-15 20:51:07.504336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.644 [2024-07-15 20:51:07.504341] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:03.644 [2024-07-15 20:51:07.504363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.586 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.586 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:04.586 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:04.586 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:04.586 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:04.586 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.586 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:04.586 [2024-07-15 20:51:08.305330] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:04.586 [2024-07-15 20:51:08.305421] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:04.586 [2024-07-15 20:51:08.305451] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:04.586 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:04.586 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c800d118-f483-445c-9213-353cc47f61ed 00:15:04.586 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c800d118-f483-445c-9213-353cc47f61ed 00:15:04.586 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:04.586 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:04.586 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:04.586 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:04.586 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:04.847 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c800d118-f483-445c-9213-353cc47f61ed -t 2000 00:15:04.847 [ 00:15:04.847 { 00:15:04.847 "name": "c800d118-f483-445c-9213-353cc47f61ed", 00:15:04.847 "aliases": [ 00:15:04.847 "lvs/lvol" 00:15:04.847 ], 00:15:04.847 "product_name": "Logical Volume", 00:15:04.847 "block_size": 4096, 00:15:04.847 "num_blocks": 38912, 00:15:04.847 "uuid": "c800d118-f483-445c-9213-353cc47f61ed", 00:15:04.847 "assigned_rate_limits": { 00:15:04.847 "rw_ios_per_sec": 0, 00:15:04.847 "rw_mbytes_per_sec": 0, 00:15:04.847 "r_mbytes_per_sec": 0, 00:15:04.847 "w_mbytes_per_sec": 0 00:15:04.847 }, 00:15:04.847 "claimed": false, 00:15:04.847 "zoned": false, 00:15:04.847 "supported_io_types": { 00:15:04.847 "read": true, 00:15:04.847 "write": true, 00:15:04.847 "unmap": true, 00:15:04.847 "flush": false, 00:15:04.847 "reset": true, 00:15:04.847 "nvme_admin": false, 00:15:04.847 "nvme_io": false, 00:15:04.847 "nvme_io_md": 
false, 00:15:04.847 "write_zeroes": true, 00:15:04.847 "zcopy": false, 00:15:04.847 "get_zone_info": false, 00:15:04.847 "zone_management": false, 00:15:04.847 "zone_append": false, 00:15:04.847 "compare": false, 00:15:04.847 "compare_and_write": false, 00:15:04.847 "abort": false, 00:15:04.847 "seek_hole": true, 00:15:04.847 "seek_data": true, 00:15:04.847 "copy": false, 00:15:04.847 "nvme_iov_md": false 00:15:04.847 }, 00:15:04.847 "driver_specific": { 00:15:04.847 "lvol": { 00:15:04.847 "lvol_store_uuid": "12b7c65f-15f9-4cc9-bc21-6b70fbf226f9", 00:15:04.847 "base_bdev": "aio_bdev", 00:15:04.847 "thin_provision": false, 00:15:04.847 "num_allocated_clusters": 38, 00:15:04.847 "snapshot": false, 00:15:04.847 "clone": false, 00:15:04.847 "esnap_clone": false 00:15:04.847 } 00:15:04.847 } 00:15:04.847 } 00:15:04.847 ] 00:15:04.847 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:04.847 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12b7c65f-15f9-4cc9-bc21-6b70fbf226f9 00:15:04.847 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:05.108 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:05.108 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12b7c65f-15f9-4cc9-bc21-6b70fbf226f9 00:15:05.108 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:05.108 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:05.108 20:51:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:05.367 [2024-07-15 20:51:09.073254] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:05.367 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12b7c65f-15f9-4cc9-bc21-6b70fbf226f9 00:15:05.367 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:05.367 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12b7c65f-15f9-4cc9-bc21-6b70fbf226f9 00:15:05.367 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.367 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.367 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.367 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.367 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:15:05.367 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.367 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.367 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:05.367 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12b7c65f-15f9-4cc9-bc21-6b70fbf226f9 00:15:05.627 request: 00:15:05.627 { 00:15:05.627 "uuid": "12b7c65f-15f9-4cc9-bc21-6b70fbf226f9", 00:15:05.627 "method": "bdev_lvol_get_lvstores", 00:15:05.627 "req_id": 1 00:15:05.627 } 00:15:05.627 Got JSON-RPC error response 00:15:05.627 response: 00:15:05.627 { 00:15:05.627 "code": -19, 00:15:05.627 "message": "No such device" 00:15:05.627 } 00:15:05.627 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:05.627 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:05.627 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:05.627 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:05.627 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:05.627 aio_bdev 00:15:05.627 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c800d118-f483-445c-9213-353cc47f61ed 00:15:05.627 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c800d118-f483-445c-9213-353cc47f61ed 00:15:05.627 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:05.627 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:05.627 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:05.627 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:05.627 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:05.888 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c800d118-f483-445c-9213-353cc47f61ed -t 2000 00:15:05.888 [ 00:15:05.888 { 00:15:05.888 "name": "c800d118-f483-445c-9213-353cc47f61ed", 00:15:05.888 "aliases": [ 00:15:05.888 "lvs/lvol" 00:15:05.888 ], 00:15:05.888 "product_name": "Logical Volume", 00:15:05.888 "block_size": 4096, 00:15:05.888 "num_blocks": 38912, 00:15:05.888 "uuid": "c800d118-f483-445c-9213-353cc47f61ed", 00:15:05.888 "assigned_rate_limits": { 00:15:05.888 "rw_ios_per_sec": 0, 00:15:05.888 "rw_mbytes_per_sec": 0, 00:15:05.888 "r_mbytes_per_sec": 0, 00:15:05.888 "w_mbytes_per_sec": 0 00:15:05.888 }, 00:15:05.888 "claimed": false, 00:15:05.888 "zoned": false, 00:15:05.888 "supported_io_types": { 
00:15:05.888 "read": true, 00:15:05.888 "write": true, 00:15:05.888 "unmap": true, 00:15:05.888 "flush": false, 00:15:05.888 "reset": true, 00:15:05.888 "nvme_admin": false, 00:15:05.888 "nvme_io": false, 00:15:05.888 "nvme_io_md": false, 00:15:05.888 "write_zeroes": true, 00:15:05.888 "zcopy": false, 00:15:05.888 "get_zone_info": false, 00:15:05.888 "zone_management": false, 00:15:05.888 "zone_append": false, 00:15:05.888 "compare": false, 00:15:05.888 "compare_and_write": false, 00:15:05.888 "abort": false, 00:15:05.888 "seek_hole": true, 00:15:05.888 "seek_data": true, 00:15:05.888 "copy": false, 00:15:05.888 "nvme_iov_md": false 00:15:05.888 }, 00:15:05.888 "driver_specific": { 00:15:05.888 "lvol": { 00:15:05.888 "lvol_store_uuid": "12b7c65f-15f9-4cc9-bc21-6b70fbf226f9", 00:15:05.888 "base_bdev": "aio_bdev", 00:15:05.888 "thin_provision": false, 00:15:05.888 "num_allocated_clusters": 38, 00:15:05.888 "snapshot": false, 00:15:05.888 "clone": false, 00:15:05.888 "esnap_clone": false 00:15:05.888 } 00:15:05.888 } 00:15:05.888 } 00:15:05.888 ] 00:15:05.888 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:05.888 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12b7c65f-15f9-4cc9-bc21-6b70fbf226f9 00:15:05.888 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:06.147 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:06.147 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12b7c65f-15f9-4cc9-bc21-6b70fbf226f9 00:15:06.147 20:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:06.406 20:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:06.406 20:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c800d118-f483-445c-9213-353cc47f61ed 00:15:06.406 20:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 12b7c65f-15f9-4cc9-bc21-6b70fbf226f9 00:15:06.665 20:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:06.665 20:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:06.926 00:15:06.926 real 0m17.094s 00:15:06.926 user 0m44.289s 00:15:06.926 sys 0m3.121s 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:06.926 ************************************ 00:15:06.926 END TEST lvs_grow_dirty 00:15:06.926 ************************************ 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:06.926 nvmf_trace.0 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:06.926 rmmod nvme_tcp 00:15:06.926 rmmod nvme_fabrics 00:15:06.926 rmmod nvme_keyring 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1527493 ']' 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1527493 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1527493 ']' 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1527493 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1527493 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1527493' 00:15:06.926 killing process with pid 1527493 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1527493 00:15:06.926 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1527493 00:15:07.211 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:07.211 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:07.211 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:07.211 
20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.211 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:07.211 20:51:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.211 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.211 20:51:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.167 20:51:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:09.167 00:15:09.167 real 0m43.207s 00:15:09.167 user 1m5.119s 00:15:09.167 sys 0m10.189s 00:15:09.167 20:51:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:09.167 20:51:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:09.167 ************************************ 00:15:09.167 END TEST nvmf_lvs_grow 00:15:09.167 ************************************ 00:15:09.167 20:51:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:09.167 20:51:13 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:09.167 20:51:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:09.167 20:51:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.167 20:51:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:09.428 ************************************ 00:15:09.428 START TEST nvmf_bdev_io_wait 00:15:09.428 ************************************ 00:15:09.428 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:09.428 * Looking for test storage... 
00:15:09.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:09.428 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.428 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:09.428 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.428 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.428 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.428 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.428 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:09.429 20:51:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:17.589 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:17.589 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.589 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:17.590 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:17.590 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:17.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:17.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:15:17.590 00:15:17.590 --- 10.0.0.2 ping statistics --- 00:15:17.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.590 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:15:17.590 00:15:17.590 --- 10.0.0.1 ping statistics --- 00:15:17.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.590 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1532917 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1532917 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1532917 ']' 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.590 20:51:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.590 [2024-07-15 20:51:20.535711] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
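For reference, the nvmf_tcp_init sequence traced above reduces to roughly the following manual steps; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addressing are specific to this run's E810 ports, and 4420 is the NVMe/TCP port the target listens on later:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target sanity check, as in the output above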
00:15:17.590 [2024-07-15 20:51:20.535776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.590 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.590 [2024-07-15 20:51:20.612271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.590 [2024-07-15 20:51:20.689228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.590 [2024-07-15 20:51:20.689269] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.590 [2024-07-15 20:51:20.689277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.590 [2024-07-15 20:51:20.689284] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.590 [2024-07-15 20:51:20.689289] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.590 [2024-07-15 20:51:20.689337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.590 [2024-07-15 20:51:20.689424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.590 [2024-07-15 20:51:20.689581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.590 [2024-07-15 20:51:20.689582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.590 [2024-07-15 20:51:21.421515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
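Because nvmf_tgt was started with --wait-for-rpc, the bdev layer can be reconfigured before framework initialization resumes; the rpc_cmd steps above and in the records that follow correspond roughly to this plain rpc.py sequence (the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions of this sketch, and -p 5 -c 1 shrinks the bdev_io pool so that I/O exercises the queue_io_wait path this test targets):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_set_options -p 5 -c 1                  # tiny bdev_io pool/cache sizes
$RPC framework_start_init                        # resume init paused by --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport; -u 8192 is the I/O unit size
$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420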
00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.590 Malloc0 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.590 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.851 [2024-07-15 20:51:21.493430] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1533121 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1533123 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:17.851 { 00:15:17.851 "params": { 00:15:17.851 "name": "Nvme$subsystem", 00:15:17.851 "trtype": "$TEST_TRANSPORT", 00:15:17.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:17.851 "adrfam": "ipv4", 00:15:17.851 "trsvcid": "$NVMF_PORT", 00:15:17.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:17.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:17.851 "hdgst": ${hdgst:-false}, 00:15:17.851 "ddgst": ${ddgst:-false} 00:15:17.851 }, 00:15:17.851 "method": "bdev_nvme_attach_controller" 00:15:17.851 } 00:15:17.851 EOF 00:15:17.851 )") 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1533125 00:15:17.851 20:51:21 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1533128 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:17.851 { 00:15:17.851 "params": { 00:15:17.851 "name": "Nvme$subsystem", 00:15:17.851 "trtype": "$TEST_TRANSPORT", 00:15:17.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:17.851 "adrfam": "ipv4", 00:15:17.851 "trsvcid": "$NVMF_PORT", 00:15:17.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:17.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:17.851 "hdgst": ${hdgst:-false}, 00:15:17.851 "ddgst": ${ddgst:-false} 00:15:17.851 }, 00:15:17.851 "method": "bdev_nvme_attach_controller" 00:15:17.851 } 00:15:17.851 EOF 00:15:17.851 )") 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:17.851 { 00:15:17.851 "params": { 00:15:17.851 "name": "Nvme$subsystem", 00:15:17.851 "trtype": "$TEST_TRANSPORT", 00:15:17.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:17.851 "adrfam": "ipv4", 00:15:17.851 "trsvcid": "$NVMF_PORT", 00:15:17.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:17.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:17.851 "hdgst": ${hdgst:-false}, 00:15:17.851 "ddgst": ${ddgst:-false} 00:15:17.851 }, 00:15:17.851 "method": "bdev_nvme_attach_controller" 00:15:17.851 } 00:15:17.851 EOF 00:15:17.851 )") 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:17.851 { 00:15:17.851 "params": { 00:15:17.851 "name": "Nvme$subsystem", 00:15:17.851 "trtype": "$TEST_TRANSPORT", 00:15:17.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:17.851 "adrfam": "ipv4", 00:15:17.851 "trsvcid": "$NVMF_PORT", 00:15:17.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:17.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:17.851 "hdgst": ${hdgst:-false}, 00:15:17.851 "ddgst": ${ddgst:-false} 00:15:17.851 }, 00:15:17.851 "method": "bdev_nvme_attach_controller" 00:15:17.851 } 00:15:17.851 EOF 00:15:17.851 )") 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1533121 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:17.851 "params": { 00:15:17.851 "name": "Nvme1", 00:15:17.851 "trtype": "tcp", 00:15:17.851 "traddr": "10.0.0.2", 00:15:17.851 "adrfam": "ipv4", 00:15:17.851 "trsvcid": "4420", 00:15:17.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:17.851 "hdgst": false, 00:15:17.851 "ddgst": false 00:15:17.851 }, 00:15:17.851 "method": "bdev_nvme_attach_controller" 00:15:17.851 }' 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:17.851 "params": { 00:15:17.851 "name": "Nvme1", 00:15:17.851 "trtype": "tcp", 00:15:17.851 "traddr": "10.0.0.2", 00:15:17.851 "adrfam": "ipv4", 00:15:17.851 "trsvcid": "4420", 00:15:17.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:17.851 "hdgst": false, 00:15:17.851 "ddgst": false 00:15:17.851 }, 00:15:17.851 "method": "bdev_nvme_attach_controller" 00:15:17.851 }' 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:17.851 "params": { 00:15:17.851 "name": "Nvme1", 00:15:17.851 "trtype": "tcp", 00:15:17.851 "traddr": "10.0.0.2", 00:15:17.851 "adrfam": "ipv4", 00:15:17.851 "trsvcid": "4420", 00:15:17.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:17.851 "hdgst": false, 00:15:17.851 "ddgst": false 00:15:17.851 }, 00:15:17.851 "method": "bdev_nvme_attach_controller" 00:15:17.851 }' 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:17.851 20:51:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:17.851 "params": { 00:15:17.851 "name": "Nvme1", 00:15:17.851 "trtype": "tcp", 00:15:17.851 "traddr": "10.0.0.2", 00:15:17.851 "adrfam": "ipv4", 00:15:17.851 "trsvcid": "4420", 00:15:17.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:17.851 "hdgst": false, 00:15:17.851 "ddgst": false 00:15:17.851 }, 00:15:17.851 "method": "bdev_nvme_attach_controller" 00:15:17.851 }' 00:15:17.851 [2024-07-15 20:51:21.546204] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:15:17.851 [2024-07-15 20:51:21.546257] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:17.851 [2024-07-15 20:51:21.548800] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:15:17.851 [2024-07-15 20:51:21.548847] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:17.851 [2024-07-15 20:51:21.549926] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:15:17.851 [2024-07-15 20:51:21.549970] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:17.851 [2024-07-15 20:51:21.552050] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
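Reformatted for readability, the controller entry that gen_nvmf_target_json prints above (once per bdevperf instance, then wrapped by gen_nvmf_target_json and handed to bdevperf as --json /dev/fd/63 via process substitution) is:

{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}

so all four bdevperf processes (write/read/flush/unmap on core masks 0x10/0x20/0x40/0x80) attach to the same Nvme1 controller over 10.0.0.2:4420.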
00:15:17.851 [2024-07-15 20:51:21.552095] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:17.851 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.852 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.852 [2024-07-15 20:51:21.690983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.852 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.852 [2024-07-15 20:51:21.741891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:18.111 [2024-07-15 20:51:21.745022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.111 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.111 [2024-07-15 20:51:21.794150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.111 [2024-07-15 20:51:21.796292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:18.111 [2024-07-15 20:51:21.840743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.111 [2024-07-15 20:51:21.844085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:18.111 [2024-07-15 20:51:21.890999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:18.111 Running I/O for 1 seconds... 00:15:18.111 Running I/O for 1 seconds... 00:15:18.371 Running I/O for 1 seconds... 00:15:18.371 Running I/O for 1 seconds... 00:15:19.313 00:15:19.313 Latency(us) 00:15:19.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.313 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:19.313 Nvme1n1 : 1.01 11094.00 43.34 0.00 0.00 11459.86 6089.39 19551.57 00:15:19.313 =================================================================================================================== 00:15:19.313 Total : 11094.00 43.34 0.00 0.00 11459.86 6089.39 19551.57 00:15:19.313 00:15:19.313 Latency(us) 00:15:19.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.313 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:19.313 Nvme1n1 : 1.01 14559.87 56.87 0.00 0.00 8760.46 5925.55 19333.12 00:15:19.313 =================================================================================================================== 00:15:19.313 Total : 14559.87 56.87 0.00 0.00 8760.46 5925.55 19333.12 00:15:19.313 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1533123 00:15:19.313 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1533125 00:15:19.313 00:15:19.313 Latency(us) 00:15:19.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.313 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:19.313 Nvme1n1 : 1.00 12004.51 46.89 0.00 0.00 10639.11 3372.37 29928.11 00:15:19.313 =================================================================================================================== 00:15:19.313 Total : 12004.51 46.89 0.00 0.00 10639.11 3372.37 29928.11 00:15:19.313 00:15:19.313 Latency(us) 00:15:19.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.313 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:19.313 Nvme1n1 : 1.00 159342.80 622.43 0.00 0.00 800.07 276.48 948.91 00:15:19.313 
=================================================================================================================== 00:15:19.313 Total : 159342.80 622.43 0.00 0.00 800.07 276.48 948.91 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1533128 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:19.575 rmmod nvme_tcp 00:15:19.575 rmmod nvme_fabrics 00:15:19.575 rmmod nvme_keyring 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1532917 ']' 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1532917 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1532917 ']' 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1532917 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1532917 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1532917' 00:15:19.575 killing process with pid 1532917 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1532917 00:15:19.575 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1532917 00:15:19.837 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:19.837 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:19.837 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:19.837 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
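The teardown traced here, and completed in the records that follow, roughly undoes the earlier setup (a sketch; remove_spdk_ns is assumed to delete the test namespace, and the pid/interface names are the ones from this run):

modprobe -r nvme-tcp nvme-fabrics        # unloads nvme_tcp/nvme_fabrics/nvme_keyring, per the rmmod output above
kill 1532917                             # stop the nvmf_tgt started for this test
ip netns delete cvl_0_0_ns_spdk          # returns cvl_0_0 to the default namespace
ip -4 addr flush cvl_0_1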
00:15:19.837 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:19.837 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.837 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.837 20:51:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.750 20:51:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:21.750 00:15:21.750 real 0m12.522s 00:15:21.750 user 0m18.721s 00:15:21.750 sys 0m6.749s 00:15:21.750 20:51:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.750 20:51:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:21.750 ************************************ 00:15:21.750 END TEST nvmf_bdev_io_wait 00:15:21.750 ************************************ 00:15:22.011 20:51:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:22.011 20:51:25 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:22.011 20:51:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:22.011 20:51:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.011 20:51:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:22.011 ************************************ 00:15:22.011 START TEST nvmf_queue_depth 00:15:22.011 ************************************ 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:22.011 * Looking for test storage... 
00:15:22.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.011 20:51:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:22.012 20:51:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:30.154 
20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:30.154 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:30.154 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.154 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:30.155 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:30.155 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:30.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:30.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:15:30.155 00:15:30.155 --- 10.0.0.2 ping statistics --- 00:15:30.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.155 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:30.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:30.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.393 ms 00:15:30.155 00:15:30.155 --- 10.0.0.1 ping statistics --- 00:15:30.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.155 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1537634 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1537634 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1537634 ']' 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.155 20:51:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.155 [2024-07-15 20:51:32.982169] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:15:30.155 [2024-07-15 20:51:32.982234] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.155 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.155 [2024-07-15 20:51:33.069656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.155 [2024-07-15 20:51:33.161647] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.155 [2024-07-15 20:51:33.161701] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.155 [2024-07-15 20:51:33.161709] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.155 [2024-07-15 20:51:33.161716] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.155 [2024-07-15 20:51:33.161722] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.155 [2024-07-15 20:51:33.161747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.155 [2024-07-15 20:51:33.816849] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.155 Malloc0 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.155 
20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.155 [2024-07-15 20:51:33.890569] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1537870 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1537870 /var/tmp/bdevperf.sock 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1537870 ']' 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:30.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.155 20:51:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.155 [2024-07-15 20:51:33.945624] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:15:30.155 [2024-07-15 20:51:33.945678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537870 ] 00:15:30.155 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.155 [2024-07-15 20:51:34.007086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.415 [2024-07-15 20:51:34.078151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.986 20:51:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.986 20:51:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:30.986 20:51:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:30.987 20:51:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.987 20:51:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:30.987 NVMe0n1 00:15:30.987 20:51:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.987 20:51:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:31.246 Running I/O for 10 seconds... 00:15:41.283 00:15:41.283 Latency(us) 00:15:41.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.283 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:41.283 Verification LBA range: start 0x0 length 0x4000 00:15:41.283 NVMe0n1 : 10.06 11358.16 44.37 0.00 0.00 89808.04 23702.19 68157.44 00:15:41.283 =================================================================================================================== 00:15:41.283 Total : 11358.16 44.37 0.00 0.00 89808.04 23702.19 68157.44 00:15:41.283 0 00:15:41.283 20:51:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1537870 00:15:41.283 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1537870 ']' 00:15:41.283 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1537870 00:15:41.283 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:41.283 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:41.283 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1537870 00:15:41.283 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:41.283 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:41.283 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1537870' 00:15:41.283 killing process with pid 1537870 00:15:41.283 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1537870 00:15:41.283 Received shutdown signal, test time was about 10.000000 seconds 00:15:41.283 00:15:41.283 Latency(us) 00:15:41.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.283 
=================================================================================================================== 00:15:41.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:41.283 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1537870 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.542 rmmod nvme_tcp 00:15:41.542 rmmod nvme_fabrics 00:15:41.542 rmmod nvme_keyring 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1537634 ']' 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1537634 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1537634 ']' 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1537634 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1537634 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1537634' 00:15:41.542 killing process with pid 1537634 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1537634 00:15:41.542 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1537634 00:15:41.802 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:41.802 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:41.802 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:41.802 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.802 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.802 20:51:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.802 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.802 20:51:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.712 20:51:47 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:43.712 00:15:43.712 real 0m21.827s 00:15:43.712 user 0m25.473s 00:15:43.712 sys 0m6.437s 00:15:43.712 20:51:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:43.712 20:51:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:43.712 ************************************ 00:15:43.712 END TEST nvmf_queue_depth 00:15:43.712 ************************************ 00:15:43.712 20:51:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:43.713 20:51:47 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:43.713 20:51:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:43.713 20:51:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:43.713 20:51:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:43.713 ************************************ 00:15:43.713 START TEST nvmf_target_multipath 00:15:43.713 ************************************ 00:15:43.713 20:51:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:43.973 * Looking for test storage... 00:15:43.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:43.973 20:51:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.973 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:43.973 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.973 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.973 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.973 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.973 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.973 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.973 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.973 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.973 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.973 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.973 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:43.973 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:43.974 20:51:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.556 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:50.557 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:50.557 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:50.557 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:50.557 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:50.557 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:50.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:15:50.818 00:15:50.818 --- 10.0.0.2 ping statistics --- 00:15:50.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.818 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:50.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:50.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.408 ms 00:15:50.818 00:15:50.818 --- 10.0.0.1 ping statistics --- 00:15:50.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.818 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:50.818 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:51.079 only one NIC for nvmf test 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:51.079 rmmod nvme_tcp 00:15:51.079 rmmod nvme_fabrics 00:15:51.079 rmmod nvme_keyring 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.079 20:51:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.993 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:52.993 20:51:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:52.993 20:51:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:52.993 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:52.993 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:52.993 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:52.993 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:52.993 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:52.993 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:53.255 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:53.255 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:53.255 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:53.255 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:53.255 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:53.255 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:53.255 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:53.255 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:53.255 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:53.255 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.255 20:51:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.255 20:51:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.255 20:51:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:53.255 00:15:53.255 real 0m9.305s 00:15:53.255 user 0m2.000s 00:15:53.255 sys 0m5.203s 00:15:53.255 20:51:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.255 20:51:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:53.255 ************************************ 00:15:53.255 END TEST nvmf_target_multipath 00:15:53.255 ************************************ 00:15:53.255 20:51:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:53.255 20:51:56 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:53.255 20:51:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:53.255 20:51:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.255 20:51:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.255 ************************************ 00:15:53.255 START TEST nvmf_zcopy 00:15:53.255 ************************************ 00:15:53.255 20:51:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:53.255 * Looking for test storage... 
00:15:53.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.255 20:51:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.256 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:53.256 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:53.256 20:51:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:53.256 20:51:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:01.400 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.400 
20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:01.400 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.400 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:01.401 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:01.401 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.401 20:52:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:01.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:16:01.401 00:16:01.401 --- 10.0.0.2 ping statistics --- 00:16:01.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.401 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:01.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:16:01.401 00:16:01.401 --- 10.0.0.1 ping statistics --- 00:16:01.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.401 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1548299 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1548299 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1548299 ']' 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.401 20:52:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:01.401 [2024-07-15 20:52:04.282161] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:16:01.401 [2024-07-15 20:52:04.282209] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.401 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.401 [2024-07-15 20:52:04.363689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.401 [2024-07-15 20:52:04.438267] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.401 [2024-07-15 20:52:04.438321] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:01.401 [2024-07-15 20:52:04.438329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.401 [2024-07-15 20:52:04.438336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.401 [2024-07-15 20:52:04.438341] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.401 [2024-07-15 20:52:04.438374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:01.401 [2024-07-15 20:52:05.110301] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:01.401 [2024-07-15 20:52:05.126544] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:01.401 malloc0 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.401 
20:52:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:01.401 20:52:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:01.402 20:52:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:01.402 20:52:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:01.402 { 00:16:01.402 "params": { 00:16:01.402 "name": "Nvme$subsystem", 00:16:01.402 "trtype": "$TEST_TRANSPORT", 00:16:01.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:01.402 "adrfam": "ipv4", 00:16:01.402 "trsvcid": "$NVMF_PORT", 00:16:01.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:01.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:01.402 "hdgst": ${hdgst:-false}, 00:16:01.402 "ddgst": ${ddgst:-false} 00:16:01.402 }, 00:16:01.402 "method": "bdev_nvme_attach_controller" 00:16:01.402 } 00:16:01.402 EOF 00:16:01.402 )") 00:16:01.402 20:52:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:01.402 20:52:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:01.402 20:52:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:01.402 20:52:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:01.402 "params": { 00:16:01.402 "name": "Nvme1", 00:16:01.402 "trtype": "tcp", 00:16:01.402 "traddr": "10.0.0.2", 00:16:01.402 "adrfam": "ipv4", 00:16:01.402 "trsvcid": "4420", 00:16:01.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:01.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:01.402 "hdgst": false, 00:16:01.402 "ddgst": false 00:16:01.402 }, 00:16:01.402 "method": "bdev_nvme_attach_controller" 00:16:01.402 }' 00:16:01.402 [2024-07-15 20:52:05.213010] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:16:01.402 [2024-07-15 20:52:05.213072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1548384 ] 00:16:01.402 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.402 [2024-07-15 20:52:05.276266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.663 [2024-07-15 20:52:05.350639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.663 Running I/O for 10 seconds... 
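[editor's note] On the initiator side, bdevperf is fed a generated JSON config over /dev/fd/62; the printf output above shows the attach parameters it contains. A hand-written equivalent would look roughly like the following sketch. The outer "subsystems"/"bdev" wrapper and the zcopy_initiator.json filename are assumptions for illustration; only the inner params/method object appears verbatim in the log:

  cat > zcopy_initiator.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  EOF

  # 10-second verify workload, queue depth 128, 8 KiB I/O (flags as in the log)
  build/examples/bdevperf --json zcopy_initiator.json -t 10 -q 128 -w verify -o 8192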
00:16:11.697 00:16:11.697 Latency(us) 00:16:11.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.697 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:11.697 Verification LBA range: start 0x0 length 0x1000 00:16:11.697 Nvme1n1 : 10.01 9060.14 70.78 0.00 0.00 14074.84 2484.91 37792.43 00:16:11.697 =================================================================================================================== 00:16:11.697 Total : 9060.14 70.78 0.00 0.00 14074.84 2484.91 37792.43 00:16:11.958 20:52:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1550458 00:16:11.958 20:52:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:11.958 20:52:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:11.958 20:52:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:11.958 20:52:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:11.958 20:52:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:11.958 [2024-07-15 20:52:15.665024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.958 [2024-07-15 20:52:15.665052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.958 20:52:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:11.958 20:52:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:11.958 20:52:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:11.958 { 00:16:11.958 "params": { 00:16:11.958 "name": "Nvme$subsystem", 00:16:11.958 "trtype": "$TEST_TRANSPORT", 00:16:11.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:11.958 "adrfam": "ipv4", 00:16:11.958 "trsvcid": "$NVMF_PORT", 00:16:11.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:11.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:11.958 "hdgst": ${hdgst:-false}, 00:16:11.958 "ddgst": ${ddgst:-false} 00:16:11.958 }, 00:16:11.958 "method": "bdev_nvme_attach_controller" 00:16:11.958 } 00:16:11.958 EOF 00:16:11.958 )") 00:16:11.958 20:52:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:11.958 [2024-07-15 20:52:15.673009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.958 [2024-07-15 20:52:15.673017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.958 20:52:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
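[editor's note] The second bdevperf run (5-second randrw, 50% read mix, fed over /dev/fd/63) is started the same way, and the long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs that follows is the expected result of re-issuing the namespace-add RPC for NSID 1 while that I/O is in flight. A hypothetical loop in the spirit of what the log shows (not the literal target/zcopy.sh code; gen_nvmf_target_json is the harness helper seen above):

  # hypothetical reproduction of the RPC-under-I/O pattern recorded in this trace
  build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
  perfpid=$!

  while kill -0 "$perfpid" 2>/dev/null; do
      # NSID 1 is already attached, so each attempt fails on the target with
      # "Requested NSID 1 already in use" and the RPC reports the error seen above
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done
  wait "$perfpid"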
00:16:11.958 20:52:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:11.958 20:52:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:11.958 "params": { 00:16:11.958 "name": "Nvme1", 00:16:11.958 "trtype": "tcp", 00:16:11.958 "traddr": "10.0.0.2", 00:16:11.958 "adrfam": "ipv4", 00:16:11.958 "trsvcid": "4420", 00:16:11.958 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:11.958 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:11.958 "hdgst": false, 00:16:11.958 "ddgst": false 00:16:11.958 }, 00:16:11.958 "method": "bdev_nvme_attach_controller" 00:16:11.958 }' 00:16:11.958 [2024-07-15 20:52:15.681027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.958 [2024-07-15 20:52:15.681034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.958 [2024-07-15 20:52:15.689048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.958 [2024-07-15 20:52:15.689056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.958 [2024-07-15 20:52:15.697068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.958 [2024-07-15 20:52:15.697076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.958 [2024-07-15 20:52:15.705089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.958 [2024-07-15 20:52:15.705097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.958 [2024-07-15 20:52:15.710839] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:16:11.958 [2024-07-15 20:52:15.710885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550458 ] 00:16:11.958 [2024-07-15 20:52:15.713110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.958 [2024-07-15 20:52:15.713118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.958 [2024-07-15 20:52:15.721134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.958 [2024-07-15 20:52:15.721141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.958 [2024-07-15 20:52:15.729155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.958 [2024-07-15 20:52:15.729162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.958 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.958 [2024-07-15 20:52:15.737174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.958 [2024-07-15 20:52:15.737181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.959 [2024-07-15 20:52:15.745194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.959 [2024-07-15 20:52:15.745201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.959 [2024-07-15 20:52:15.753214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.959 [2024-07-15 20:52:15.753221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.959 [2024-07-15 20:52:15.761235] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.959 [2024-07-15 20:52:15.761242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.959 [2024-07-15 20:52:15.768448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.959 [2024-07-15 20:52:15.769255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.959 [2024-07-15 20:52:15.769262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.959 [2024-07-15 20:52:15.777276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.959 [2024-07-15 20:52:15.777283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.959 [2024-07-15 20:52:15.785295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.959 [2024-07-15 20:52:15.785302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.959 [2024-07-15 20:52:15.793316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.959 [2024-07-15 20:52:15.793324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.959 [2024-07-15 20:52:15.801338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.959 [2024-07-15 20:52:15.801348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.959 [2024-07-15 20:52:15.809360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.959 [2024-07-15 20:52:15.809370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.959 [2024-07-15 20:52:15.817379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.959 [2024-07-15 20:52:15.817387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.959 [2024-07-15 20:52:15.825401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.959 [2024-07-15 20:52:15.825408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.959 [2024-07-15 20:52:15.833348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.959 [2024-07-15 20:52:15.833421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.959 [2024-07-15 20:52:15.833428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.959 [2024-07-15 20:52:15.841442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.959 [2024-07-15 20:52:15.841450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.959 [2024-07-15 20:52:15.849469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.959 [2024-07-15 20:52:15.849482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.857487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.857497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.865505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.865513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:12.221 [2024-07-15 20:52:15.873524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.873531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.881545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.881553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.889564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.889572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.897583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.897590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.905604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.905611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.913634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.913648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.921648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.921657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.929668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.929677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.937688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.937698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.945708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.945717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.953727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.953734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.961749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.961756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.969770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.969777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.977792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.977799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.985813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:12.221 [2024-07-15 20:52:15.985820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:15.993834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:15.993844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:16.001853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:16.001864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:16.009873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:16.009880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:16.017894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:16.017902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:16.025916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:16.025923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:16.033938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:16.033947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:16.041957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:16.041964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:16.049979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:16.049986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:16.057999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:16.058006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:16.066020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:16.066027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:16.074042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:16.074049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:16.082064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:16.082071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:16.090092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:16.090108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 [2024-07-15 20:52:16.098106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:16.098113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.221 Running I/O for 5 seconds... 
00:16:12.221 [2024-07-15 20:52:16.106128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.221 [2024-07-15 20:52:16.106134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.482 [2024-07-15 20:52:16.117106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.482 [2024-07-15 20:52:16.117126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.482 [2024-07-15 20:52:16.125927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.482 [2024-07-15 20:52:16.125942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.482 [2024-07-15 20:52:16.134973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.482 [2024-07-15 20:52:16.134987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.482 [2024-07-15 20:52:16.143728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.482 [2024-07-15 20:52:16.143743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.482 [2024-07-15 20:52:16.152672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.482 [2024-07-15 20:52:16.152686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.482 [2024-07-15 20:52:16.161542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.482 [2024-07-15 20:52:16.161560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.482 [2024-07-15 20:52:16.174929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.482 [2024-07-15 20:52:16.174945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.482 [2024-07-15 20:52:16.188434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.482 [2024-07-15 20:52:16.188448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.482 [2024-07-15 20:52:16.201298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.482 [2024-07-15 20:52:16.201312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.482 [2024-07-15 20:52:16.214950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.482 [2024-07-15 20:52:16.214965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.482 [2024-07-15 20:52:16.228272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.482 [2024-07-15 20:52:16.228287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.482 [2024-07-15 20:52:16.240736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.482 [2024-07-15 20:52:16.240751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.482 [2024-07-15 20:52:16.253865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.482 [2024-07-15 20:52:16.253879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.483 [2024-07-15 20:52:16.266934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.483 
[2024-07-15 20:52:16.266949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.483 [2024-07-15 20:52:16.280017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.483 [2024-07-15 20:52:16.280032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.483 [2024-07-15 20:52:16.292551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.483 [2024-07-15 20:52:16.292566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.483 [2024-07-15 20:52:16.305421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.483 [2024-07-15 20:52:16.305436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.483 [2024-07-15 20:52:16.318868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.483 [2024-07-15 20:52:16.318883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.483 [2024-07-15 20:52:16.332269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.483 [2024-07-15 20:52:16.332284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.483 [2024-07-15 20:52:16.344781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.483 [2024-07-15 20:52:16.344795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.483 [2024-07-15 20:52:16.357841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.483 [2024-07-15 20:52:16.357856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.483 [2024-07-15 20:52:16.370606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.483 [2024-07-15 20:52:16.370621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.383758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.383772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.397374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.397388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.410580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.410601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.423015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.423030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.435851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.435864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.449185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.449200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.462449] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.462464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.475519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.475533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.488936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.488951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.501260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.501275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.514285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.514299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.527421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.527435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.540736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.540751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.553600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.553615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.567135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.567150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.580104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.580118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.593424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.593438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.606528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.606543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.619612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.619626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.744 [2024-07-15 20:52:16.632578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.744 [2024-07-15 20:52:16.632592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.004 [2024-07-15 20:52:16.645661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.004 [2024-07-15 20:52:16.645677] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.658793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.658808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.672189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.672204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.685355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.685370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.698727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.698741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.711551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.711565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.724892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.724906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.738409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.738423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.751784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.751798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.764941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.764955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.777557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.777572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.790721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.790736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.803483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.803497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.816356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.816370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.829266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.829280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.842806] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.842820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.855848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.855862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.869084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.869098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.882363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.882377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.005 [2024-07-15 20:52:16.895111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.005 [2024-07-15 20:52:16.895130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.265 [2024-07-15 20:52:16.907620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.265 [2024-07-15 20:52:16.907635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.265 [2024-07-15 20:52:16.920619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.265 [2024-07-15 20:52:16.920634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.265 [2024-07-15 20:52:16.933262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.265 [2024-07-15 20:52:16.933276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.265 [2024-07-15 20:52:16.946185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.265 [2024-07-15 20:52:16.946199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.265 [2024-07-15 20:52:16.959294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.265 [2024-07-15 20:52:16.959308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.265 [2024-07-15 20:52:16.972308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.265 [2024-07-15 20:52:16.972322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.265 [2024-07-15 20:52:16.985534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.265 [2024-07-15 20:52:16.985548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.265 [2024-07-15 20:52:16.998348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.265 [2024-07-15 20:52:16.998362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.265 [2024-07-15 20:52:17.011429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.265 [2024-07-15 20:52:17.011444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.265 [2024-07-15 20:52:17.024696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.266 [2024-07-15 20:52:17.024710] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.266 [2024-07-15 20:52:17.038078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.266 [2024-07-15 20:52:17.038093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.266 [2024-07-15 20:52:17.051160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.266 [2024-07-15 20:52:17.051175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.266 [2024-07-15 20:52:17.063468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.266 [2024-07-15 20:52:17.063483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.266 [2024-07-15 20:52:17.077083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.266 [2024-07-15 20:52:17.077097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.266 [2024-07-15 20:52:17.089469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.266 [2024-07-15 20:52:17.089484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.266 [2024-07-15 20:52:17.102884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.266 [2024-07-15 20:52:17.102899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.266 [2024-07-15 20:52:17.116332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.266 [2024-07-15 20:52:17.116347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.266 [2024-07-15 20:52:17.129372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.266 [2024-07-15 20:52:17.129387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.266 [2024-07-15 20:52:17.142428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.266 [2024-07-15 20:52:17.142443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.266 [2024-07-15 20:52:17.156036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.266 [2024-07-15 20:52:17.156051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.168884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.168899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.182288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.182302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.195345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.195360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.208636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.208651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.221847] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.221861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.234860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.234874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.246853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.246868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.260030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.260044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.273472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.273486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.286647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.286661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.299874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.299888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.313075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.313089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.326194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.326209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.339331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.339345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.352739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.352753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.365991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.366005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.379190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.379205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.392394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.392409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.527 [2024-07-15 20:52:17.405754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.527 [2024-07-15 20:52:17.405769] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.419228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.419242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.432540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.432555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.445605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.445619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.458783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.458797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.471721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.471735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.484447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.484461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.496827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.496842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.509252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.509267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.522095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.522110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.534838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.534852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.547810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.547824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.560956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.560970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.574433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.574448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.587830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.587845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.600947] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.600961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.613666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.613681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.626231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.626246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.639056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.639075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.651470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.651484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.664174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.664188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.789 [2024-07-15 20:52:17.677283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.789 [2024-07-15 20:52:17.677299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.690700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.690715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.703952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.703966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.716701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.716716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.729848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.729863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.742860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.742874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.756415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.756430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.769421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.769435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.782442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.782456] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.795672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.795688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.809371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.809385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.822422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.822437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.835787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.835802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.849112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.849132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.861986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.862000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.874940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.874954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.888297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.888315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.901374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.901388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.914166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.914180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.927206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.927220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.051 [2024-07-15 20:52:17.940549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.051 [2024-07-15 20:52:17.940563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.311 [2024-07-15 20:52:17.953788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.311 [2024-07-15 20:52:17.953802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.311 [2024-07-15 20:52:17.967151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.311 [2024-07-15 20:52:17.967166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.311 [2024-07-15 20:52:17.980618] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.311 [2024-07-15 20:52:17.980633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.311 [2024-07-15 20:52:17.993834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.311 [2024-07-15 20:52:17.993848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.311 [2024-07-15 20:52:18.006749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.311 [2024-07-15 20:52:18.006763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.311 [2024-07-15 20:52:18.020380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.311 [2024-07-15 20:52:18.020395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.311 [2024-07-15 20:52:18.032769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.311 [2024-07-15 20:52:18.032783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.311 [2024-07-15 20:52:18.046069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.311 [2024-07-15 20:52:18.046083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.311 [2024-07-15 20:52:18.059279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.311 [2024-07-15 20:52:18.059293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.311 [2024-07-15 20:52:18.072696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.311 [2024-07-15 20:52:18.072710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.312 [2024-07-15 20:52:18.084975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.312 [2024-07-15 20:52:18.084989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.312 [2024-07-15 20:52:18.098149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.312 [2024-07-15 20:52:18.098163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.312 [2024-07-15 20:52:18.111667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.312 [2024-07-15 20:52:18.111682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.312 [2024-07-15 20:52:18.124792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.312 [2024-07-15 20:52:18.124806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.312 [2024-07-15 20:52:18.137854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.312 [2024-07-15 20:52:18.137872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.312 [2024-07-15 20:52:18.150937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.312 [2024-07-15 20:52:18.150952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.312 [2024-07-15 20:52:18.164456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.312 [2024-07-15 20:52:18.164471] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.312 [2024-07-15 20:52:18.176670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.312 [2024-07-15 20:52:18.176684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.312 [2024-07-15 20:52:18.189989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.312 [2024-07-15 20:52:18.190003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.312 [2024-07-15 20:52:18.203217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.312 [2024-07-15 20:52:18.203231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.571 [2024-07-15 20:52:18.216507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.571 [2024-07-15 20:52:18.216521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.571 [2024-07-15 20:52:18.229649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.571 [2024-07-15 20:52:18.229663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.571 [2024-07-15 20:52:18.242928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.571 [2024-07-15 20:52:18.242942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.256304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.256318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.269614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.269628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.283032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.283046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.296427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.296441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.309207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.309221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.322698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.322713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.335717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.335732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.348921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.348936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.361648] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.361662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.374791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.374806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.387951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.387969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.400273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.400287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.413593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.413607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.426829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.426843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.440000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.440015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.572 [2024-07-15 20:52:18.452388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.572 [2024-07-15 20:52:18.452402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.465515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.465529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.478891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.478905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.491731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.491745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.504702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.504716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.517615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.517629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.530310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.530324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.543862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.543876] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.556497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.556511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.569390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.569404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.582254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.582269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.594754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.594768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.608335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.608350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.621471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.621485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.634837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.634851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.647677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.647692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.660748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.660762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.674164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.674178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.687515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.687529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.700253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.700267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.831 [2024-07-15 20:52:18.712900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.831 [2024-07-15 20:52:18.712915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.725263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.725277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.738501] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.738515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.751347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.751362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.764329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.764343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.777515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.777528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.790816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.790830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.803189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.803203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.816027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.816041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.828908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.828923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.842535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.842550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.855105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.855119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.867946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.867960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.880946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.880960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.894235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.894250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.907357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.907372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.920619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.920633] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.933921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.933936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.946935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.946949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.960120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.960139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.090 [2024-07-15 20:52:18.973573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.090 [2024-07-15 20:52:18.973588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:18.986759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:18.986774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:18.999358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:18.999372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.011662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.011677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.024354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.024370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.037489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.037504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.049703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.049717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.063251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.063267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.076865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.076880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.089959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.089974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.103225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.103240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.116593] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.116607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.129862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.129877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.143075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.143089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.156014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.156029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.169440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.169454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.182048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.182063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.195368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.195383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.208768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.208783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.221752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.221766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.350 [2024-07-15 20:52:19.234966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.350 [2024-07-15 20:52:19.234981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.248203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.248218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.261068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.261083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.273995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.274010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.286156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.286171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.298628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.298643] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.312184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.312198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.324701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.324715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.337699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.337714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.350740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.350754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.364405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.364419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.377825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.377839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.391189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.391203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.404033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.404048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.417241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.417255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.430409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.430424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.443772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.443787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.457301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.457315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.469912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.469926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.483164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.483179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.610 [2024-07-15 20:52:19.496501] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.610 [2024-07-15 20:52:19.496516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.509176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.509191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.522513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.522527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.536013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.536027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.549304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.549319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.562486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.562500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.576144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.576158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.589325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.589339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.601908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.601922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.615232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.615250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.628175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.628189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.640931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.640944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.654160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.654174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.667371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.667385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.680441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.680455] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.693700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.693715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.706989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.707003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.720426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.720440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.733643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.733657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.746572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.746586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:15.870 [2024-07-15 20:52:19.759500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:15.870 [2024-07-15 20:52:19.759515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.772549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.772564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.785537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.785550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.797965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.797980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.810979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.810993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.824149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.824163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.837482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.837496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.850460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.850475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.863996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.864015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.877256] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.877270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.890192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.890206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.903328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.903342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.916565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.916579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.929581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.929595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.942340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.942354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.955001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.955015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.967636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.967650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.980986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.981000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:19.993956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:19.993971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:20.007448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:20.007465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.130 [2024-07-15 20:52:20.020209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.130 [2024-07-15 20:52:20.020225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.033876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.033937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.046522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.046537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.059791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.059805] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.073402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.073417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.086879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.086894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.100173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.100188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.113329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.113350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.126290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.126305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.139754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.139769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.153333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.153347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.165777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.165791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.179149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.179164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.191690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.191705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.204860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.204874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.218125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.218139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.231031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.231046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.243410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.243424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.256591] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.256605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.390 [2024-07-15 20:52:20.269872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.390 [2024-07-15 20:52:20.269887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.283316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.283330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.296443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.296457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.309603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.309618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.322468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.322482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.335699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.335713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.349017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.349032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.362397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.362415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.375622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.375636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.388904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.388918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.401516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.401530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.414759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.414774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.427844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.427859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.440986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.441000] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.453955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.453970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.466915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.466930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.480159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.480173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.493221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.493236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.505606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.505621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.518707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.518721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.650 [2024-07-15 20:52:20.531853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.650 [2024-07-15 20:52:20.531867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.545415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.545430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.558091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.558106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.571549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.571563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.584633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.584648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.597576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.597591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.610884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.610899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.624140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.624154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.637185] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.637199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.650455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.650470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.663478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.663491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.676562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.676577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.690043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.690058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.702574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.702588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.715576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.715590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.728183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.728198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.740987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.741001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.753488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.753502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.765787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.765801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.778553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.778568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.911 [2024-07-15 20:52:20.790618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.911 [2024-07-15 20:52:20.790632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.803232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.803247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.816527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.816541] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.829409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.829424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.842100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.842115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.855172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.855187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.868076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.868090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.880366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.880381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.893345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.893359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.906357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.906372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.919585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.919600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.932345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.932359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.945414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.945429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.958086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.958101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.971469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.971483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.984592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.984606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:20.998086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.172 [2024-07-15 20:52:20.998100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.172 [2024-07-15 20:52:21.010672] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.172 [2024-07-15 20:52:21.010687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.172 [2024-07-15 20:52:21.024043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.172 [2024-07-15 20:52:21.024058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.172 [2024-07-15 20:52:21.036899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.172 [2024-07-15 20:52:21.036914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.172 [2024-07-15 20:52:21.050264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.172 [2024-07-15 20:52:21.050279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.172 [2024-07-15 20:52:21.063173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.172 [2024-07-15 20:52:21.063187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.432 [2024-07-15 20:52:21.076672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.432 [2024-07-15 20:52:21.076687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.432 [2024-07-15 20:52:21.090021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.432 [2024-07-15 20:52:21.090036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.432 [2024-07-15 20:52:21.103342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.432 [2024-07-15 20:52:21.103356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.432 [2024-07-15 20:52:21.115726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.432 [2024-07-15 20:52:21.115741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.432
00:16:17.432 Latency(us)
00:16:17.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:17.432 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:17.432 Nvme1n1 : 5.01 19470.14 152.11 0.00 0.00 6566.56 2853.55 16602.45
00:16:17.432 ===================================================================================================================
00:16:17.432 Total : 19470.14 152.11 0.00 0.00 6566.56 2853.55 16602.45
00:16:17.432 [2024-07-15 20:52:21.125329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.432 [2024-07-15 20:52:21.125344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.432 [2024-07-15 20:52:21.137365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.432 [2024-07-15 20:52:21.137378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.432 [2024-07-15 20:52:21.149395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.432 [2024-07-15 20:52:21.149405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:17.432 [2024-07-15 20:52:21.161425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:17.432 [2024-07-15 20:52:21.161438]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.432 [2024-07-15 20:52:21.173449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.432 [2024-07-15 20:52:21.173459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.432 [2024-07-15 20:52:21.185481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.432 [2024-07-15 20:52:21.185490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.432 [2024-07-15 20:52:21.197511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.432 [2024-07-15 20:52:21.197519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.432 [2024-07-15 20:52:21.209545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.432 [2024-07-15 20:52:21.209555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.432 [2024-07-15 20:52:21.221572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.432 [2024-07-15 20:52:21.221580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.432 [2024-07-15 20:52:21.233606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.432 [2024-07-15 20:52:21.233617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.432 [2024-07-15 20:52:21.253654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.432 [2024-07-15 20:52:21.253662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1550458) - No such process 00:16:17.432 20:52:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1550458 00:16:17.432 20:52:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.432 20:52:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.432 20:52:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:17.432 20:52:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.432 20:52:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:17.432 20:52:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.432 20:52:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:17.432 delay0 00:16:17.432 20:52:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.432 20:52:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:17.432 20:52:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.432 20:52:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:17.432 20:52:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.432 20:52:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:17.692 EAL: No free 2048 kB hugepages 
reported on node 1 00:16:17.692 [2024-07-15 20:52:21.398318] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:24.274 Initializing NVMe Controllers 00:16:24.274 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:24.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:24.274 Initialization complete. Launching workers. 00:16:24.274 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 109 00:16:24.275 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 396, failed to submit 33 00:16:24.275 success 211, unsuccess 185, failed 0 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:24.275 rmmod nvme_tcp 00:16:24.275 rmmod nvme_fabrics 00:16:24.275 rmmod nvme_keyring 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1548299 ']' 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1548299 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1548299 ']' 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1548299 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1548299 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1548299' 00:16:24.275 killing process with pid 1548299 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1548299 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1548299 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.275 20:52:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.189 20:52:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:26.189 00:16:26.189 real 0m32.974s 00:16:26.189 user 0m44.677s 00:16:26.189 sys 0m10.081s 00:16:26.189 20:52:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:26.189 20:52:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:26.189 ************************************ 00:16:26.189 END TEST nvmf_zcopy 00:16:26.189 ************************************ 00:16:26.189 20:52:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:26.189 20:52:29 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:26.189 20:52:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:26.189 20:52:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.189 20:52:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:26.189 ************************************ 00:16:26.189 START TEST nvmf_nmic 00:16:26.189 ************************************ 00:16:26.189 20:52:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:26.451 * Looking for test storage... 00:16:26.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.451 20:52:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:26.452 20:52:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:26.452 20:52:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:26.452 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:26.452 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.452 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:26.452 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:26.452 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:26.452 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.452 20:52:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.452 20:52:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.452 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:26.452 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:26.452 20:52:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:26.452 20:52:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:33.041 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:33.041 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:33.041 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:33.041 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:33.041 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:33.042 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.042 20:52:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.302 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.302 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:33.302 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:33.302 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.302 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.302 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:33.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:33.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:16:33.562 00:16:33.562 --- 10.0.0.2 ping statistics --- 00:16:33.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.562 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:33.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:33.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:16:33.562 00:16:33.562 --- 10.0.0.1 ping statistics --- 00:16:33.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.562 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1556999 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1556999 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1556999 ']' 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.562 20:52:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.563 20:52:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:33.563 [2024-07-15 20:52:37.324469] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:16:33.563 [2024-07-15 20:52:37.324533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.563 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.563 [2024-07-15 20:52:37.390900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:33.827 [2024-07-15 20:52:37.457017] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.827 [2024-07-15 20:52:37.457058] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.827 [2024-07-15 20:52:37.457065] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.827 [2024-07-15 20:52:37.457071] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.827 [2024-07-15 20:52:37.457077] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.827 [2024-07-15 20:52:37.457222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.827 [2024-07-15 20:52:37.457401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.827 [2024-07-15 20:52:37.457561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.827 [2024-07-15 20:52:37.457563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:34.476 [2024-07-15 20:52:38.134809] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:34.476 Malloc0 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:34.476 [2024-07-15 20:52:38.194194] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:34.476 test case1: single bdev can't be used in multiple subsystems 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:34.476 [2024-07-15 20:52:38.230095] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:34.476 [2024-07-15 20:52:38.230113] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:34.476 [2024-07-15 20:52:38.230120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.476 request: 00:16:34.476 { 00:16:34.476 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:34.476 "namespace": { 00:16:34.476 "bdev_name": "Malloc0", 00:16:34.476 "no_auto_visible": false 00:16:34.476 }, 00:16:34.476 "method": "nvmf_subsystem_add_ns", 00:16:34.476 "req_id": 1 00:16:34.476 } 00:16:34.476 Got JSON-RPC error response 00:16:34.476 response: 00:16:34.476 { 00:16:34.476 "code": -32602, 00:16:34.476 "message": "Invalid parameters" 00:16:34.476 } 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # 
echo ' Adding namespace failed - expected result.' 00:16:34.476 Adding namespace failed - expected result. 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:34.476 test case2: host connect to nvmf target in multiple paths 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:34.476 [2024-07-15 20:52:38.242236] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.476 20:52:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:36.389 20:52:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:37.824 20:52:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:37.824 20:52:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:37.824 20:52:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:37.824 20:52:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:37.824 20:52:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:39.761 20:52:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:39.761 20:52:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:39.761 20:52:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:39.761 20:52:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:39.761 20:52:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:39.761 20:52:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:39.761 20:52:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:39.761 [global] 00:16:39.761 thread=1 00:16:39.761 invalidate=1 00:16:39.761 rw=write 00:16:39.761 time_based=1 00:16:39.761 runtime=1 00:16:39.761 ioengine=libaio 00:16:39.761 direct=1 00:16:39.761 bs=4096 00:16:39.761 iodepth=1 00:16:39.761 norandommap=0 00:16:39.761 numjobs=1 00:16:39.761 00:16:39.761 verify_dump=1 00:16:39.761 verify_backlog=512 00:16:39.761 verify_state_save=0 00:16:39.761 do_verify=1 00:16:39.761 verify=crc32c-intel 00:16:39.761 [job0] 00:16:39.761 filename=/dev/nvme0n1 00:16:39.761 Could not set queue depth (nvme0n1) 00:16:40.025 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:40.025 fio-3.35 00:16:40.025 Starting 1 thread 00:16:41.411 00:16:41.411 job0: (groupid=0, jobs=1): err= 0: pid=1558447: Mon Jul 15 20:52:44 2024 00:16:41.411 read: IOPS=15, BW=63.5KiB/s 
(65.0kB/s)(64.0KiB/1008msec) 00:16:41.411 slat (nsec): min=25037, max=26225, avg=25526.31, stdev=283.57 00:16:41.411 clat (usec): min=41009, max=41993, avg=41761.86, stdev=373.88 00:16:41.411 lat (usec): min=41035, max=42019, avg=41787.39, stdev=373.92 00:16:41.411 clat percentiles (usec): 00:16:41.411 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:16:41.411 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:16:41.411 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:41.411 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:41.411 | 99.99th=[42206] 00:16:41.411 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:16:41.411 slat (nsec): min=9609, max=71008, avg=28373.74, stdev=9161.54 00:16:41.411 clat (usec): min=387, max=1297, avg=628.20, stdev=64.66 00:16:41.411 lat (usec): min=419, max=1330, avg=656.58, stdev=67.30 00:16:41.411 clat percentiles (usec): 00:16:41.411 | 1.00th=[ 420], 5.00th=[ 523], 10.00th=[ 537], 20.00th=[ 578], 00:16:41.411 | 30.00th=[ 619], 40.00th=[ 627], 50.00th=[ 635], 60.00th=[ 652], 00:16:41.411 | 70.00th=[ 668], 80.00th=[ 676], 90.00th=[ 685], 95.00th=[ 693], 00:16:41.411 | 99.00th=[ 742], 99.50th=[ 758], 99.90th=[ 1303], 99.95th=[ 1303], 00:16:41.411 | 99.99th=[ 1303] 00:16:41.411 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:41.411 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:41.411 lat (usec) : 500=1.52%, 750=94.70%, 1000=0.57% 00:16:41.411 lat (msec) : 2=0.19%, 50=3.03% 00:16:41.411 cpu : usr=0.60%, sys=1.59%, ctx=529, majf=0, minf=1 00:16:41.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:41.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.411 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:41.411 00:16:41.411 Run status group 0 (all jobs): 00:16:41.411 READ: bw=63.5KiB/s (65.0kB/s), 63.5KiB/s-63.5KiB/s (65.0kB/s-65.0kB/s), io=64.0KiB (65.5kB), run=1008-1008msec 00:16:41.411 WRITE: bw=2032KiB/s (2081kB/s), 2032KiB/s-2032KiB/s (2081kB/s-2081kB/s), io=2048KiB (2097kB), run=1008-1008msec 00:16:41.411 00:16:41.411 Disk stats (read/write): 00:16:41.411 nvme0n1: ios=63/512, merge=0/0, ticks=618/321, in_queue=939, util=94.39% 00:16:41.411 20:52:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:41.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:41.411 20:52:45 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:41.411 rmmod nvme_tcp 00:16:41.411 rmmod nvme_fabrics 00:16:41.411 rmmod nvme_keyring 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1556999 ']' 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1556999 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1556999 ']' 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1556999 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1556999 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1556999' 00:16:41.411 killing process with pid 1556999 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1556999 00:16:41.411 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1556999 00:16:41.672 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:41.672 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:41.672 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:41.672 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:41.672 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:41.672 20:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.672 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.672 20:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.586 20:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:43.586 00:16:43.586 real 0m17.387s 00:16:43.586 user 0m48.490s 00:16:43.586 sys 0m5.987s 00:16:43.586 20:52:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:43.586 20:52:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:43.586 ************************************ 00:16:43.586 END TEST nvmf_nmic 00:16:43.586 ************************************ 00:16:43.586 20:52:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:43.586 20:52:47 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:43.586 20:52:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:43.586 20:52:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:43.586 20:52:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:43.848 ************************************ 00:16:43.848 START TEST nvmf_fio_target 00:16:43.848 ************************************ 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:43.848 * Looking for test storage... 00:16:43.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:43.848 20:52:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:43.849 20:52:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:43.849 20:52:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:43.849 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:43.849 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.849 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:43.849 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:43.849 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:43.849 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.849 20:52:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.849 20:52:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.849 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:43.849 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:43.849 20:52:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:43.849 20:52:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:50.438 20:52:54 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:50.438 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:50.438 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:50.438 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.439 20:52:54 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:50.439 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:50.439 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:50.439 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:50.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:16:50.700 00:16:50.700 --- 10.0.0.2 ping statistics --- 00:16:50.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.700 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:50.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:50.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:16:50.700 00:16:50.700 --- 10.0.0.1 ping statistics --- 00:16:50.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.700 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1562877 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1562877 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1562877 ']' 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.700 20:52:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.961 [2024-07-15 20:52:54.622321] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:16:50.961 [2024-07-15 20:52:54.622380] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.961 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.961 [2024-07-15 20:52:54.689270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:50.961 [2024-07-15 20:52:54.755657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.961 [2024-07-15 20:52:54.755691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.961 [2024-07-15 20:52:54.755699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.961 [2024-07-15 20:52:54.755705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.961 [2024-07-15 20:52:54.755714] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:50.961 [2024-07-15 20:52:54.755847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.961 [2024-07-15 20:52:54.755966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.961 [2024-07-15 20:52:54.756127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.961 [2024-07-15 20:52:54.756150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.532 20:52:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.532 20:52:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:51.532 20:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:51.532 20:52:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.532 20:52:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.793 20:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.793 20:52:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:51.793 [2024-07-15 20:52:55.593273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.793 20:52:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:52.053 20:52:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:52.053 20:52:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:52.314 20:52:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:52.314 20:52:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:52.314 20:52:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:16:52.314 20:52:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:52.574 20:52:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:52.574 20:52:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:52.835 20:52:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:52.835 20:52:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:52.835 20:52:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:53.095 20:52:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:53.095 20:52:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:53.355 20:52:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:53.355 20:52:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:53.355 20:52:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:53.615 20:52:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:53.615 20:52:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:53.875 20:52:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:53.875 20:52:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:53.875 20:52:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.136 [2024-07-15 20:52:57.858164] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.136 20:52:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:54.396 20:52:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:54.396 20:52:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:56.338 20:52:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:56.338 20:52:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:16:56.338 20:52:59 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:56.338 20:52:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:16:56.338 20:52:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:16:56.338 20:52:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:16:58.270 20:53:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:58.270 20:53:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:58.270 20:53:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:58.270 20:53:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:16:58.270 20:53:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:58.270 20:53:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:16:58.270 20:53:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:58.270 [global] 00:16:58.270 thread=1 00:16:58.270 invalidate=1 00:16:58.270 rw=write 00:16:58.270 time_based=1 00:16:58.270 runtime=1 00:16:58.270 ioengine=libaio 00:16:58.270 direct=1 00:16:58.270 bs=4096 00:16:58.270 iodepth=1 00:16:58.270 norandommap=0 00:16:58.270 numjobs=1 00:16:58.270 00:16:58.270 verify_dump=1 00:16:58.270 verify_backlog=512 00:16:58.270 verify_state_save=0 00:16:58.270 do_verify=1 00:16:58.270 verify=crc32c-intel 00:16:58.270 [job0] 00:16:58.270 filename=/dev/nvme0n1 00:16:58.270 [job1] 00:16:58.270 filename=/dev/nvme0n2 00:16:58.270 [job2] 00:16:58.270 filename=/dev/nvme0n3 00:16:58.270 [job3] 00:16:58.270 filename=/dev/nvme0n4 00:16:58.270 Could not set queue depth (nvme0n1) 00:16:58.270 Could not set queue depth (nvme0n2) 00:16:58.270 Could not set queue depth (nvme0n3) 00:16:58.270 Could not set queue depth (nvme0n4) 00:16:58.534 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:58.534 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:58.534 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:58.534 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:58.534 fio-3.35 00:16:58.534 Starting 4 threads 00:16:59.938 00:16:59.938 job0: (groupid=0, jobs=1): err= 0: pid=1564469: Mon Jul 15 20:53:03 2024 00:16:59.938 read: IOPS=453, BW=1812KiB/s (1856kB/s)(1816KiB/1002msec) 00:16:59.938 slat (nsec): min=6935, max=62865, avg=26491.20, stdev=4948.62 00:16:59.938 clat (usec): min=881, max=1398, avg=1157.71, stdev=64.36 00:16:59.938 lat (usec): min=908, max=1424, avg=1184.20, stdev=65.04 00:16:59.938 clat percentiles (usec): 00:16:59.938 | 1.00th=[ 996], 5.00th=[ 1057], 10.00th=[ 1090], 20.00th=[ 1106], 00:16:59.938 | 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1172], 00:16:59.938 | 70.00th=[ 1188], 80.00th=[ 1205], 90.00th=[ 1237], 95.00th=[ 1270], 00:16:59.938 | 99.00th=[ 1336], 99.50th=[ 1352], 99.90th=[ 1401], 99.95th=[ 1401], 00:16:59.938 | 99.99th=[ 1401] 00:16:59.938 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:16:59.938 slat (usec): min=9, max=2915, avg=43.96, stdev=164.90 00:16:59.938 clat 
(usec): min=482, max=2032, avg=845.47, stdev=132.79 00:16:59.938 lat (usec): min=495, max=3621, avg=889.43, stdev=211.76 00:16:59.938 clat percentiles (usec): 00:16:59.938 | 1.00th=[ 537], 5.00th=[ 644], 10.00th=[ 660], 20.00th=[ 742], 00:16:59.938 | 30.00th=[ 783], 40.00th=[ 816], 50.00th=[ 857], 60.00th=[ 898], 00:16:59.938 | 70.00th=[ 930], 80.00th=[ 955], 90.00th=[ 988], 95.00th=[ 1012], 00:16:59.938 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 2040], 99.95th=[ 2040], 00:16:59.938 | 99.99th=[ 2040] 00:16:59.938 bw ( KiB/s): min= 4096, max= 4096, per=41.25%, avg=4096.00, stdev= 0.00, samples=1 00:16:59.938 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:59.938 lat (usec) : 500=0.10%, 750=12.22%, 1000=37.68% 00:16:59.938 lat (msec) : 2=49.90%, 4=0.10% 00:16:59.938 cpu : usr=1.90%, sys=4.10%, ctx=972, majf=0, minf=1 00:16:59.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.938 issued rwts: total=454,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.938 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.938 job1: (groupid=0, jobs=1): err= 0: pid=1564470: Mon Jul 15 20:53:03 2024 00:16:59.938 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:59.938 slat (nsec): min=6623, max=66571, avg=25024.59, stdev=6306.84 00:16:59.938 clat (usec): min=319, max=1953, avg=843.03, stdev=208.53 00:16:59.938 lat (usec): min=345, max=1980, avg=868.05, stdev=209.62 00:16:59.938 clat percentiles (usec): 00:16:59.938 | 1.00th=[ 482], 5.00th=[ 529], 10.00th=[ 594], 20.00th=[ 668], 00:16:59.938 | 30.00th=[ 717], 40.00th=[ 783], 50.00th=[ 824], 60.00th=[ 865], 00:16:59.938 | 70.00th=[ 922], 80.00th=[ 1045], 90.00th=[ 1123], 95.00th=[ 1188], 00:16:59.938 | 99.00th=[ 1303], 99.50th=[ 1631], 99.90th=[ 1958], 99.95th=[ 1958], 00:16:59.938 | 99.99th=[ 1958] 00:16:59.938 write: IOPS=1014, BW=4060KiB/s (4157kB/s)(4064KiB/1001msec); 0 zone resets 00:16:59.938 slat (nsec): min=8892, max=67732, avg=27512.19, stdev=11413.41 00:16:59.938 clat (usec): min=133, max=1929, avg=507.70, stdev=184.56 00:16:59.938 lat (usec): min=145, max=1962, avg=535.21, stdev=186.83 00:16:59.938 clat percentiles (usec): 00:16:59.938 | 1.00th=[ 139], 5.00th=[ 165], 10.00th=[ 253], 20.00th=[ 343], 00:16:59.938 | 30.00th=[ 420], 40.00th=[ 490], 50.00th=[ 537], 60.00th=[ 578], 00:16:59.938 | 70.00th=[ 611], 80.00th=[ 644], 90.00th=[ 709], 95.00th=[ 742], 00:16:59.938 | 99.00th=[ 848], 99.50th=[ 1319], 99.90th=[ 1467], 99.95th=[ 1926], 00:16:59.938 | 99.99th=[ 1926] 00:16:59.938 bw ( KiB/s): min= 4096, max= 4096, per=41.25%, avg=4096.00, stdev= 0.00, samples=1 00:16:59.938 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:59.938 lat (usec) : 250=6.28%, 500=22.05%, 750=47.45%, 1000=15.58% 00:16:59.938 lat (msec) : 2=8.64% 00:16:59.938 cpu : usr=2.50%, sys=5.80%, ctx=1530, majf=0, minf=1 00:16:59.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.938 issued rwts: total=512,1016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.938 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.938 job2: (groupid=0, jobs=1): err= 0: pid=1564471: Mon Jul 15 20:53:03 2024 
00:16:59.938 read: IOPS=137, BW=549KiB/s (562kB/s)(564KiB/1028msec) 00:16:59.938 slat (nsec): min=26800, max=46143, avg=27891.72, stdev=3212.62 00:16:59.938 clat (usec): min=862, max=42940, avg=4621.25, stdev=11540.77 00:16:59.938 lat (usec): min=902, max=42968, avg=4649.14, stdev=11540.49 00:16:59.938 clat percentiles (usec): 00:16:59.938 | 1.00th=[ 881], 5.00th=[ 938], 10.00th=[ 996], 20.00th=[ 1057], 00:16:59.938 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:16:59.938 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1254], 95.00th=[42206], 00:16:59.938 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:59.938 | 99.99th=[42730] 00:16:59.938 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:16:59.938 slat (usec): min=9, max=2776, avg=39.24, stdev=154.66 00:16:59.938 clat (usec): min=285, max=1741, avg=674.97, stdev=165.23 00:16:59.938 lat (usec): min=295, max=3786, avg=714.20, stdev=237.07 00:16:59.938 clat percentiles (usec): 00:16:59.938 | 1.00th=[ 371], 5.00th=[ 429], 10.00th=[ 490], 20.00th=[ 537], 00:16:59.938 | 30.00th=[ 578], 40.00th=[ 627], 50.00th=[ 676], 60.00th=[ 701], 00:16:59.938 | 70.00th=[ 758], 80.00th=[ 799], 90.00th=[ 857], 95.00th=[ 914], 00:16:59.938 | 99.00th=[ 1090], 99.50th=[ 1287], 99.90th=[ 1745], 99.95th=[ 1745], 00:16:59.938 | 99.99th=[ 1745] 00:16:59.938 bw ( KiB/s): min= 4096, max= 4096, per=41.25%, avg=4096.00, stdev= 0.00, samples=1 00:16:59.938 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:59.938 lat (usec) : 500=9.65%, 750=44.10%, 1000=24.66% 00:16:59.938 lat (msec) : 2=19.75%, 50=1.84% 00:16:59.938 cpu : usr=1.36%, sys=2.34%, ctx=657, majf=0, minf=1 00:16:59.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.938 issued rwts: total=141,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.938 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.938 job3: (groupid=0, jobs=1): err= 0: pid=1564472: Mon Jul 15 20:53:03 2024 00:16:59.939 read: IOPS=14, BW=58.5KiB/s (59.9kB/s)(60.0KiB/1026msec) 00:16:59.939 slat (nsec): min=26449, max=26886, avg=26671.27, stdev=147.27 00:16:59.939 clat (usec): min=41998, max=43028, avg=42785.97, stdev=360.20 00:16:59.939 lat (usec): min=42024, max=43055, avg=42812.64, stdev=360.18 00:16:59.939 clat percentiles (usec): 00:16:59.939 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42206], 20.00th=[42206], 00:16:59.939 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:16:59.939 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:16:59.939 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:59.939 | 99.99th=[43254] 00:16:59.939 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:16:59.939 slat (usec): min=9, max=44437, avg=119.34, stdev=1962.46 00:16:59.939 clat (usec): min=262, max=2240, avg=619.12, stdev=204.74 00:16:59.939 lat (usec): min=296, max=45190, avg=738.47, stdev=1979.21 00:16:59.939 clat percentiles (usec): 00:16:59.939 | 1.00th=[ 367], 5.00th=[ 400], 10.00th=[ 429], 20.00th=[ 486], 00:16:59.939 | 30.00th=[ 515], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 619], 00:16:59.939 | 70.00th=[ 676], 80.00th=[ 750], 90.00th=[ 807], 95.00th=[ 857], 00:16:59.939 | 99.00th=[ 1434], 99.50th=[ 2040], 99.90th=[ 2245], 99.95th=[ 2245], 
00:16:59.939 | 99.99th=[ 2245] 00:16:59.939 bw ( KiB/s): min= 4096, max= 4096, per=41.25%, avg=4096.00, stdev= 0.00, samples=1 00:16:59.939 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:59.939 lat (usec) : 500=23.91%, 750=53.89%, 1000=17.08% 00:16:59.939 lat (msec) : 2=1.71%, 4=0.57%, 50=2.85% 00:16:59.939 cpu : usr=0.78%, sys=2.24%, ctx=530, majf=0, minf=1 00:16:59.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.939 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.939 00:16:59.939 Run status group 0 (all jobs): 00:16:59.939 READ: bw=4366KiB/s (4471kB/s), 58.5KiB/s-2046KiB/s (59.9kB/s-2095kB/s), io=4488KiB (4596kB), run=1001-1028msec 00:16:59.939 WRITE: bw=9930KiB/s (10.2MB/s), 1992KiB/s-4060KiB/s (2040kB/s-4157kB/s), io=9.97MiB (10.5MB), run=1001-1028msec 00:16:59.939 00:16:59.939 Disk stats (read/write): 00:16:59.939 nvme0n1: ios=375/512, merge=0/0, ticks=540/355, in_queue=895, util=86.77% 00:16:59.939 nvme0n2: ios=534/736, merge=0/0, ticks=1273/299, in_queue=1572, util=88.06% 00:16:59.939 nvme0n3: ios=195/512, merge=0/0, ticks=642/305, in_queue=947, util=92.19% 00:16:59.939 nvme0n4: ios=58/512, merge=0/0, ticks=745/262, in_queue=1007, util=97.01% 00:16:59.939 20:53:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:59.939 [global] 00:16:59.939 thread=1 00:16:59.939 invalidate=1 00:16:59.939 rw=randwrite 00:16:59.939 time_based=1 00:16:59.939 runtime=1 00:16:59.939 ioengine=libaio 00:16:59.939 direct=1 00:16:59.939 bs=4096 00:16:59.939 iodepth=1 00:16:59.939 norandommap=0 00:16:59.939 numjobs=1 00:16:59.939 00:16:59.939 verify_dump=1 00:16:59.939 verify_backlog=512 00:16:59.939 verify_state_save=0 00:16:59.939 do_verify=1 00:16:59.939 verify=crc32c-intel 00:16:59.939 [job0] 00:16:59.939 filename=/dev/nvme0n1 00:16:59.939 [job1] 00:16:59.939 filename=/dev/nvme0n2 00:16:59.939 [job2] 00:16:59.939 filename=/dev/nvme0n3 00:16:59.939 [job3] 00:16:59.939 filename=/dev/nvme0n4 00:16:59.939 Could not set queue depth (nvme0n1) 00:16:59.939 Could not set queue depth (nvme0n2) 00:16:59.939 Could not set queue depth (nvme0n3) 00:16:59.939 Could not set queue depth (nvme0n4) 00:17:00.209 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:00.209 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:00.209 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:00.209 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:00.209 fio-3.35 00:17:00.209 Starting 4 threads 00:17:01.625 00:17:01.625 job0: (groupid=0, jobs=1): err= 0: pid=1564999: Mon Jul 15 20:53:05 2024 00:17:01.625 read: IOPS=13, BW=53.8KiB/s (55.1kB/s)(56.0KiB/1041msec) 00:17:01.625 slat (nsec): min=24863, max=25596, avg=25271.50, stdev=198.03 00:17:01.625 clat (usec): min=41846, max=43090, avg=42413.34, stdev=546.41 00:17:01.625 lat (usec): min=41872, max=43115, avg=42438.61, stdev=546.42 00:17:01.625 clat percentiles (usec): 00:17:01.625 | 1.00th=[41681], 
5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:01.625 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42730], 00:17:01.625 | 70.00th=[42730], 80.00th=[43254], 90.00th=[43254], 95.00th=[43254], 00:17:01.625 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:01.625 | 99.99th=[43254] 00:17:01.625 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:17:01.625 slat (nsec): min=8749, max=67348, avg=30559.10, stdev=6865.30 00:17:01.625 clat (usec): min=493, max=1169, avg=833.60, stdev=107.37 00:17:01.625 lat (usec): min=503, max=1200, avg=864.16, stdev=109.11 00:17:01.625 clat percentiles (usec): 00:17:01.625 | 1.00th=[ 553], 5.00th=[ 660], 10.00th=[ 693], 20.00th=[ 742], 00:17:01.625 | 30.00th=[ 791], 40.00th=[ 816], 50.00th=[ 832], 60.00th=[ 865], 00:17:01.625 | 70.00th=[ 889], 80.00th=[ 930], 90.00th=[ 971], 95.00th=[ 1004], 00:17:01.625 | 99.00th=[ 1045], 99.50th=[ 1106], 99.90th=[ 1172], 99.95th=[ 1172], 00:17:01.625 | 99.99th=[ 1172] 00:17:01.625 bw ( KiB/s): min= 4096, max= 4096, per=50.98%, avg=4096.00, stdev= 0.00, samples=1 00:17:01.625 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:01.625 lat (usec) : 500=0.19%, 750=20.72%, 1000=71.29% 00:17:01.625 lat (msec) : 2=5.13%, 50=2.66% 00:17:01.625 cpu : usr=1.06%, sys=1.92%, ctx=527, majf=0, minf=1 00:17:01.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:01.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.625 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:01.625 job1: (groupid=0, jobs=1): err= 0: pid=1565000: Mon Jul 15 20:53:05 2024 00:17:01.625 read: IOPS=172, BW=691KiB/s (708kB/s)(712KiB/1030msec) 00:17:01.625 slat (nsec): min=7748, max=55989, avg=24849.11, stdev=5553.95 00:17:01.625 clat (usec): min=527, max=43146, avg=3478.24, stdev=9517.97 00:17:01.625 lat (usec): min=537, max=43169, avg=3503.09, stdev=9517.76 00:17:01.625 clat percentiles (usec): 00:17:01.625 | 1.00th=[ 889], 5.00th=[ 979], 10.00th=[ 1004], 20.00th=[ 1057], 00:17:01.625 | 30.00th=[ 1156], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1221], 00:17:01.625 | 70.00th=[ 1237], 80.00th=[ 1254], 90.00th=[ 1319], 95.00th=[41681], 00:17:01.625 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:01.625 | 99.99th=[43254] 00:17:01.625 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:17:01.625 slat (nsec): min=8936, max=49806, avg=27032.82, stdev=8499.33 00:17:01.625 clat (usec): min=321, max=2052, avg=755.62, stdev=123.82 00:17:01.625 lat (usec): min=330, max=2084, avg=782.66, stdev=126.94 00:17:01.625 clat percentiles (usec): 00:17:01.625 | 1.00th=[ 465], 5.00th=[ 553], 10.00th=[ 603], 20.00th=[ 668], 00:17:01.625 | 30.00th=[ 701], 40.00th=[ 742], 50.00th=[ 766], 60.00th=[ 791], 00:17:01.625 | 70.00th=[ 807], 80.00th=[ 840], 90.00th=[ 881], 95.00th=[ 922], 00:17:01.625 | 99.00th=[ 988], 99.50th=[ 1057], 99.90th=[ 2057], 99.95th=[ 2057], 00:17:01.625 | 99.99th=[ 2057] 00:17:01.625 bw ( KiB/s): min= 4096, max= 4096, per=50.98%, avg=4096.00, stdev= 0.00, samples=1 00:17:01.625 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:01.625 lat (usec) : 500=1.30%, 750=31.59%, 1000=42.61% 00:17:01.625 lat (msec) : 2=22.90%, 4=0.14%, 50=1.45% 00:17:01.625 cpu : usr=0.78%, sys=2.04%, 
ctx=690, majf=0, minf=1 00:17:01.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:01.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.625 issued rwts: total=178,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:01.625 job2: (groupid=0, jobs=1): err= 0: pid=1565001: Mon Jul 15 20:53:05 2024 00:17:01.625 read: IOPS=394, BW=1578KiB/s (1616kB/s)(1580KiB/1001msec) 00:17:01.625 slat (nsec): min=25996, max=62657, avg=27092.56, stdev=3608.23 00:17:01.625 clat (usec): min=958, max=1502, avg=1314.30, stdev=81.63 00:17:01.625 lat (usec): min=986, max=1528, avg=1341.39, stdev=81.51 00:17:01.625 clat percentiles (usec): 00:17:01.625 | 1.00th=[ 1074], 5.00th=[ 1156], 10.00th=[ 1205], 20.00th=[ 1254], 00:17:01.625 | 30.00th=[ 1287], 40.00th=[ 1303], 50.00th=[ 1319], 60.00th=[ 1336], 00:17:01.626 | 70.00th=[ 1369], 80.00th=[ 1385], 90.00th=[ 1418], 95.00th=[ 1418], 00:17:01.626 | 99.00th=[ 1467], 99.50th=[ 1483], 99.90th=[ 1500], 99.95th=[ 1500], 00:17:01.626 | 99.99th=[ 1500] 00:17:01.626 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:01.626 slat (nsec): min=9993, max=71827, avg=34115.31, stdev=5014.76 00:17:01.626 clat (usec): min=624, max=1085, avg=868.56, stdev=82.27 00:17:01.626 lat (usec): min=658, max=1119, avg=902.67, stdev=82.84 00:17:01.626 clat percentiles (usec): 00:17:01.626 | 1.00th=[ 676], 5.00th=[ 742], 10.00th=[ 758], 20.00th=[ 791], 00:17:01.626 | 30.00th=[ 824], 40.00th=[ 857], 50.00th=[ 881], 60.00th=[ 898], 00:17:01.626 | 70.00th=[ 914], 80.00th=[ 938], 90.00th=[ 971], 95.00th=[ 1004], 00:17:01.626 | 99.00th=[ 1045], 99.50th=[ 1074], 99.90th=[ 1090], 99.95th=[ 1090], 00:17:01.626 | 99.99th=[ 1090] 00:17:01.626 bw ( KiB/s): min= 4096, max= 4096, per=50.98%, avg=4096.00, stdev= 0.00, samples=1 00:17:01.626 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:01.626 lat (usec) : 750=4.41%, 1000=49.17% 00:17:01.626 lat (msec) : 2=46.42% 00:17:01.626 cpu : usr=2.00%, sys=3.80%, ctx=909, majf=0, minf=1 00:17:01.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:01.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.626 issued rwts: total=395,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:01.626 job3: (groupid=0, jobs=1): err= 0: pid=1565002: Mon Jul 15 20:53:05 2024 00:17:01.626 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:01.626 slat (nsec): min=7903, max=56542, avg=26108.50, stdev=3271.48 00:17:01.626 clat (usec): min=555, max=1401, avg=1149.08, stdev=97.50 00:17:01.626 lat (usec): min=581, max=1427, avg=1175.19, stdev=97.58 00:17:01.626 clat percentiles (usec): 00:17:01.626 | 1.00th=[ 832], 5.00th=[ 996], 10.00th=[ 1037], 20.00th=[ 1090], 00:17:01.626 | 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1172], 00:17:01.626 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1254], 95.00th=[ 1287], 00:17:01.626 | 99.00th=[ 1319], 99.50th=[ 1336], 99.90th=[ 1401], 99.95th=[ 1401], 00:17:01.626 | 99.99th=[ 1401] 00:17:01.626 write: IOPS=554, BW=2218KiB/s (2271kB/s)(2220KiB/1001msec); 0 zone resets 00:17:01.626 slat (nsec): min=8772, max=63594, avg=27557.27, stdev=9549.34 
00:17:01.626 clat (usec): min=304, max=1057, avg=674.63, stdev=132.88 00:17:01.626 lat (usec): min=314, max=1089, avg=702.19, stdev=136.19 00:17:01.626 clat percentiles (usec): 00:17:01.626 | 1.00th=[ 429], 5.00th=[ 510], 10.00th=[ 529], 20.00th=[ 553], 00:17:01.626 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[ 644], 60.00th=[ 668], 00:17:01.626 | 70.00th=[ 717], 80.00th=[ 799], 90.00th=[ 898], 95.00th=[ 947], 00:17:01.626 | 99.00th=[ 971], 99.50th=[ 1004], 99.90th=[ 1057], 99.95th=[ 1057], 00:17:01.626 | 99.99th=[ 1057] 00:17:01.626 bw ( KiB/s): min= 4096, max= 4096, per=50.98%, avg=4096.00, stdev= 0.00, samples=1 00:17:01.626 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:01.626 lat (usec) : 500=1.97%, 750=36.83%, 1000=15.56% 00:17:01.626 lat (msec) : 2=45.64% 00:17:01.626 cpu : usr=2.10%, sys=4.00%, ctx=1067, majf=0, minf=1 00:17:01.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:01.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.626 issued rwts: total=512,555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:01.626 00:17:01.626 Run status group 0 (all jobs): 00:17:01.626 READ: bw=4223KiB/s (4324kB/s), 53.8KiB/s-2046KiB/s (55.1kB/s-2095kB/s), io=4396KiB (4502kB), run=1001-1041msec 00:17:01.626 WRITE: bw=8035KiB/s (8227kB/s), 1967KiB/s-2218KiB/s (2015kB/s-2271kB/s), io=8364KiB (8565kB), run=1001-1041msec 00:17:01.626 00:17:01.626 Disk stats (read/write): 00:17:01.626 nvme0n1: ios=29/512, merge=0/0, ticks=425/305, in_queue=730, util=84.87% 00:17:01.626 nvme0n2: ios=222/512, merge=0/0, ticks=671/375, in_queue=1046, util=93.99% 00:17:01.626 nvme0n3: ios=297/512, merge=0/0, ticks=1258/326, in_queue=1584, util=96.94% 00:17:01.626 nvme0n4: ios=405/512, merge=0/0, ticks=417/293, in_queue=710, util=89.54% 00:17:01.626 20:53:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:01.626 [global] 00:17:01.626 thread=1 00:17:01.626 invalidate=1 00:17:01.626 rw=write 00:17:01.626 time_based=1 00:17:01.626 runtime=1 00:17:01.626 ioengine=libaio 00:17:01.626 direct=1 00:17:01.626 bs=4096 00:17:01.626 iodepth=128 00:17:01.626 norandommap=0 00:17:01.626 numjobs=1 00:17:01.626 00:17:01.626 verify_dump=1 00:17:01.626 verify_backlog=512 00:17:01.626 verify_state_save=0 00:17:01.626 do_verify=1 00:17:01.626 verify=crc32c-intel 00:17:01.626 [job0] 00:17:01.626 filename=/dev/nvme0n1 00:17:01.626 [job1] 00:17:01.626 filename=/dev/nvme0n2 00:17:01.626 [job2] 00:17:01.626 filename=/dev/nvme0n3 00:17:01.626 [job3] 00:17:01.626 filename=/dev/nvme0n4 00:17:01.626 Could not set queue depth (nvme0n1) 00:17:01.626 Could not set queue depth (nvme0n2) 00:17:01.626 Could not set queue depth (nvme0n3) 00:17:01.626 Could not set queue depth (nvme0n4) 00:17:01.891 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:01.891 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:01.891 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:01.891 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:01.891 fio-3.35 00:17:01.891 Starting 4 threads 
00:17:03.296 00:17:03.296 job0: (groupid=0, jobs=1): err= 0: pid=1565518: Mon Jul 15 20:53:06 2024 00:17:03.296 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:17:03.296 slat (nsec): min=908, max=14132k, avg=71409.49, stdev=553435.26 00:17:03.296 clat (usec): min=1279, max=38117, avg=9860.01, stdev=4901.56 00:17:03.296 lat (usec): min=1302, max=38124, avg=9931.42, stdev=4941.05 00:17:03.296 clat percentiles (usec): 00:17:03.296 | 1.00th=[ 2606], 5.00th=[ 4621], 10.00th=[ 5604], 20.00th=[ 6521], 00:17:03.296 | 30.00th=[ 7242], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 9110], 00:17:03.296 | 70.00th=[10683], 80.00th=[12911], 90.00th=[15139], 95.00th=[21890], 00:17:03.296 | 99.00th=[27395], 99.50th=[28705], 99.90th=[35914], 99.95th=[38011], 00:17:03.296 | 99.99th=[38011] 00:17:03.296 write: IOPS=6524, BW=25.5MiB/s (26.7MB/s)(25.6MiB/1004msec); 0 zone resets 00:17:03.296 slat (nsec): min=1581, max=18432k, avg=70712.15, stdev=461841.86 00:17:03.296 clat (usec): min=712, max=35644, avg=9879.15, stdev=5953.00 00:17:03.296 lat (usec): min=738, max=35646, avg=9949.86, stdev=5970.32 00:17:03.296 clat percentiles (usec): 00:17:03.296 | 1.00th=[ 2245], 5.00th=[ 3458], 10.00th=[ 4178], 20.00th=[ 5407], 00:17:03.296 | 30.00th=[ 6128], 40.00th=[ 7177], 50.00th=[ 8029], 60.00th=[ 9634], 00:17:03.296 | 70.00th=[11469], 80.00th=[13566], 90.00th=[16909], 95.00th=[21103], 00:17:03.296 | 99.00th=[32375], 99.50th=[33424], 99.90th=[35390], 99.95th=[35390], 00:17:03.296 | 99.99th=[35390] 00:17:03.296 bw ( KiB/s): min=24008, max=27384, per=30.65%, avg=25696.00, stdev=2387.19, samples=2 00:17:03.296 iops : min= 6002, max= 6846, avg=6424.00, stdev=596.80, samples=2 00:17:03.296 lat (usec) : 750=0.02%, 1000=0.01% 00:17:03.296 lat (msec) : 2=0.71%, 4=4.55%, 10=59.20%, 20=29.34%, 50=6.17% 00:17:03.296 cpu : usr=4.49%, sys=5.08%, ctx=756, majf=0, minf=1 00:17:03.296 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:03.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:03.296 issued rwts: total=6144,6551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.296 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:03.296 job1: (groupid=0, jobs=1): err= 0: pid=1565519: Mon Jul 15 20:53:06 2024 00:17:03.296 read: IOPS=3862, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1004msec) 00:17:03.296 slat (nsec): min=862, max=22735k, avg=109918.42, stdev=913210.84 00:17:03.296 clat (usec): min=909, max=72293, avg=14760.97, stdev=11411.79 00:17:03.296 lat (usec): min=2654, max=72299, avg=14870.88, stdev=11498.60 00:17:03.296 clat percentiles (usec): 00:17:03.296 | 1.00th=[ 3490], 5.00th=[ 5342], 10.00th=[ 6063], 20.00th=[ 7504], 00:17:03.296 | 30.00th=[ 8160], 40.00th=[ 9372], 50.00th=[10159], 60.00th=[11994], 00:17:03.296 | 70.00th=[14877], 80.00th=[19792], 90.00th=[30016], 95.00th=[41681], 00:17:03.296 | 99.00th=[57934], 99.50th=[60031], 99.90th=[71828], 99.95th=[71828], 00:17:03.296 | 99.99th=[71828] 00:17:03.296 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:17:03.296 slat (nsec): min=1502, max=13824k, avg=126708.36, stdev=751441.61 00:17:03.296 clat (usec): min=845, max=100060, avg=17056.19, stdev=16062.22 00:17:03.296 lat (usec): min=853, max=100070, avg=17182.90, stdev=16171.73 00:17:03.296 clat percentiles (msec): 00:17:03.296 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 8], 00:17:03.296 | 30.00th=[ 10], 40.00th=[ 12], 50.00th=[ 13], 
60.00th=[ 15], 00:17:03.296 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 34], 95.00th=[ 51], 00:17:03.296 | 99.00th=[ 92], 99.50th=[ 94], 99.90th=[ 101], 99.95th=[ 101], 00:17:03.296 | 99.99th=[ 101] 00:17:03.296 bw ( KiB/s): min=12536, max=20232, per=19.54%, avg=16384.00, stdev=5441.89, samples=2 00:17:03.296 iops : min= 3134, max= 5058, avg=4096.00, stdev=1360.47, samples=2 00:17:03.296 lat (usec) : 1000=0.10% 00:17:03.296 lat (msec) : 2=0.41%, 4=1.74%, 10=38.32%, 20=39.99%, 50=15.73% 00:17:03.296 lat (msec) : 100=3.62%, 250=0.08% 00:17:03.296 cpu : usr=2.19%, sys=3.59%, ctx=459, majf=0, minf=1 00:17:03.296 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:03.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:03.296 issued rwts: total=3878,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.296 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:03.296 job2: (groupid=0, jobs=1): err= 0: pid=1565520: Mon Jul 15 20:53:06 2024 00:17:03.296 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:17:03.296 slat (nsec): min=898, max=45934k, avg=95256.40, stdev=902608.65 00:17:03.296 clat (usec): min=672, max=64344, avg=13444.14, stdev=8127.46 00:17:03.296 lat (usec): min=678, max=64383, avg=13539.40, stdev=8174.99 00:17:03.296 clat percentiles (usec): 00:17:03.296 | 1.00th=[ 2114], 5.00th=[ 5735], 10.00th=[ 8094], 20.00th=[ 9241], 00:17:03.296 | 30.00th=[10290], 40.00th=[11207], 50.00th=[11863], 60.00th=[12911], 00:17:03.296 | 70.00th=[13698], 80.00th=[15926], 90.00th=[19268], 95.00th=[22414], 00:17:03.296 | 99.00th=[57410], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:17:03.296 | 99.99th=[64226] 00:17:03.296 write: IOPS=5333, BW=20.8MiB/s (21.8MB/s)(21.0MiB/1009msec); 0 zone resets 00:17:03.296 slat (nsec): min=1566, max=11647k, avg=78402.41, stdev=494387.46 00:17:03.296 clat (usec): min=839, max=37893, avg=11008.48, stdev=6076.63 00:17:03.296 lat (usec): min=852, max=37895, avg=11086.88, stdev=6100.94 00:17:03.296 clat percentiles (usec): 00:17:03.296 | 1.00th=[ 1532], 5.00th=[ 2835], 10.00th=[ 5276], 20.00th=[ 6980], 00:17:03.296 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[10028], 60.00th=[10945], 00:17:03.296 | 70.00th=[12125], 80.00th=[13960], 90.00th=[15270], 95.00th=[23200], 00:17:03.296 | 99.00th=[36439], 99.50th=[36963], 99.90th=[38011], 99.95th=[38011], 00:17:03.296 | 99.99th=[38011] 00:17:03.296 bw ( KiB/s): min=20104, max=21928, per=25.07%, avg=21016.00, stdev=1289.76, samples=2 00:17:03.296 iops : min= 5026, max= 5482, avg=5254.00, stdev=322.44, samples=2 00:17:03.296 lat (usec) : 750=0.03%, 1000=0.03% 00:17:03.296 lat (msec) : 2=1.76%, 4=3.54%, 10=33.32%, 20=54.45%, 50=5.66% 00:17:03.297 lat (msec) : 100=1.21% 00:17:03.297 cpu : usr=3.47%, sys=5.85%, ctx=489, majf=0, minf=1 00:17:03.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:03.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:03.297 issued rwts: total=5120,5381,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.297 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:03.297 job3: (groupid=0, jobs=1): err= 0: pid=1565521: Mon Jul 15 20:53:06 2024 00:17:03.297 read: IOPS=4662, BW=18.2MiB/s (19.1MB/s)(18.2MiB/1002msec) 00:17:03.297 slat (nsec): min=908, max=12981k, avg=101059.39, stdev=675434.59 
00:17:03.297 clat (usec): min=1138, max=44876, avg=12729.22, stdev=7031.29 00:17:03.297 lat (usec): min=2680, max=44901, avg=12830.28, stdev=7088.69 00:17:03.297 clat percentiles (usec): 00:17:03.297 | 1.00th=[ 6259], 5.00th=[ 6980], 10.00th=[ 7570], 20.00th=[ 8455], 00:17:03.297 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[11207], 00:17:03.297 | 70.00th=[12387], 80.00th=[14877], 90.00th=[23987], 95.00th=[28967], 00:17:03.297 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[43779], 00:17:03.297 | 99.99th=[44827] 00:17:03.297 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:17:03.297 slat (nsec): min=1578, max=42174k, avg=99172.42, stdev=833151.24 00:17:03.297 clat (usec): min=4804, max=56919, avg=11689.85, stdev=5196.51 00:17:03.297 lat (usec): min=4807, max=71226, avg=11789.02, stdev=5304.80 00:17:03.297 clat percentiles (usec): 00:17:03.297 | 1.00th=[ 5473], 5.00th=[ 6849], 10.00th=[ 7373], 20.00th=[ 7963], 00:17:03.297 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10814], 00:17:03.297 | 70.00th=[11994], 80.00th=[14746], 90.00th=[20055], 95.00th=[23200], 00:17:03.297 | 99.00th=[31851], 99.50th=[31851], 99.90th=[31851], 99.95th=[32113], 00:17:03.297 | 99.99th=[56886] 00:17:03.297 bw ( KiB/s): min=15872, max=24584, per=24.13%, avg=20228.00, stdev=6160.31, samples=2 00:17:03.297 iops : min= 3968, max= 6146, avg=5057.00, stdev=1540.08, samples=2 00:17:03.297 lat (msec) : 2=0.01%, 4=0.09%, 10=48.87%, 20=39.64%, 50=11.38% 00:17:03.297 lat (msec) : 100=0.01% 00:17:03.297 cpu : usr=2.80%, sys=3.80%, ctx=508, majf=0, minf=1 00:17:03.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:03.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:03.297 issued rwts: total=4672,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.297 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:03.297 00:17:03.297 Run status group 0 (all jobs): 00:17:03.297 READ: bw=76.7MiB/s (80.4MB/s), 15.1MiB/s-23.9MiB/s (15.8MB/s-25.1MB/s), io=77.4MiB (81.2MB), run=1002-1009msec 00:17:03.297 WRITE: bw=81.9MiB/s (85.8MB/s), 15.9MiB/s-25.5MiB/s (16.7MB/s-26.7MB/s), io=82.6MiB (86.6MB), run=1002-1009msec 00:17:03.297 00:17:03.297 Disk stats (read/write): 00:17:03.297 nvme0n1: ios=5146/5494, merge=0/0, ticks=34370/39293, in_queue=73663, util=91.08% 00:17:03.297 nvme0n2: ios=3604/3911, merge=0/0, ticks=25072/22933, in_queue=48005, util=85.63% 00:17:03.297 nvme0n3: ios=4296/4602, merge=0/0, ticks=39245/35328, in_queue=74573, util=92.83% 00:17:03.297 nvme0n4: ios=3606/3852, merge=0/0, ticks=25826/23023, in_queue=48849, util=99.68% 00:17:03.297 20:53:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:03.297 [global] 00:17:03.297 thread=1 00:17:03.297 invalidate=1 00:17:03.297 rw=randwrite 00:17:03.297 time_based=1 00:17:03.297 runtime=1 00:17:03.297 ioengine=libaio 00:17:03.297 direct=1 00:17:03.297 bs=4096 00:17:03.297 iodepth=128 00:17:03.297 norandommap=0 00:17:03.297 numjobs=1 00:17:03.297 00:17:03.297 verify_dump=1 00:17:03.297 verify_backlog=512 00:17:03.297 verify_state_save=0 00:17:03.297 do_verify=1 00:17:03.297 verify=crc32c-intel 00:17:03.297 [job0] 00:17:03.297 filename=/dev/nvme0n1 00:17:03.297 [job1] 00:17:03.297 filename=/dev/nvme0n2 00:17:03.297 [job2] 00:17:03.297 
filename=/dev/nvme0n3 00:17:03.297 [job3] 00:17:03.297 filename=/dev/nvme0n4 00:17:03.297 Could not set queue depth (nvme0n1) 00:17:03.297 Could not set queue depth (nvme0n2) 00:17:03.297 Could not set queue depth (nvme0n3) 00:17:03.297 Could not set queue depth (nvme0n4) 00:17:03.602 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:03.602 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:03.602 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:03.602 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:03.602 fio-3.35 00:17:03.602 Starting 4 threads 00:17:05.019 00:17:05.019 job0: (groupid=0, jobs=1): err= 0: pid=1566047: Mon Jul 15 20:53:08 2024 00:17:05.019 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:17:05.019 slat (nsec): min=891, max=16176k, avg=76903.73, stdev=686539.61 00:17:05.019 clat (usec): min=1280, max=41021, avg=11612.17, stdev=7157.50 00:17:05.019 lat (usec): min=1306, max=41046, avg=11689.07, stdev=7204.71 00:17:05.019 clat percentiles (usec): 00:17:05.019 | 1.00th=[ 2180], 5.00th=[ 3884], 10.00th=[ 5407], 20.00th=[ 6587], 00:17:05.019 | 30.00th=[ 7308], 40.00th=[ 7963], 50.00th=[ 9241], 60.00th=[10552], 00:17:05.019 | 70.00th=[12780], 80.00th=[16909], 90.00th=[23462], 95.00th=[27132], 00:17:05.019 | 99.00th=[35390], 99.50th=[37487], 99.90th=[37487], 99.95th=[39584], 00:17:05.019 | 99.99th=[41157] 00:17:05.019 write: IOPS=5466, BW=21.4MiB/s (22.4MB/s)(21.4MiB/1002msec); 0 zone resets 00:17:05.019 slat (nsec): min=1516, max=12126k, avg=95406.63, stdev=665136.79 00:17:05.019 clat (usec): min=990, max=78277, avg=12354.63, stdev=11653.82 00:17:05.019 lat (usec): min=1002, max=78284, avg=12450.04, stdev=11733.49 00:17:05.019 clat percentiles (usec): 00:17:05.019 | 1.00th=[ 2802], 5.00th=[ 4555], 10.00th=[ 5473], 20.00th=[ 6587], 00:17:05.019 | 30.00th=[ 7635], 40.00th=[ 8455], 50.00th=[ 9241], 60.00th=[10290], 00:17:05.019 | 70.00th=[12125], 80.00th=[14353], 90.00th=[19268], 95.00th=[30016], 00:17:05.019 | 99.00th=[72877], 99.50th=[76022], 99.90th=[78119], 99.95th=[78119], 00:17:05.019 | 99.99th=[78119] 00:17:05.019 bw ( KiB/s): min=20480, max=20480, per=27.45%, avg=20480.00, stdev= 0.00, samples=1 00:17:05.019 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:17:05.019 lat (usec) : 1000=0.03% 00:17:05.019 lat (msec) : 2=0.24%, 4=3.92%, 10=52.75%, 20=32.09%, 50=9.25% 00:17:05.019 lat (msec) : 100=1.73% 00:17:05.019 cpu : usr=4.00%, sys=4.80%, ctx=425, majf=0, minf=1 00:17:05.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:05.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:05.020 issued rwts: total=5120,5477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:05.020 job1: (groupid=0, jobs=1): err= 0: pid=1566048: Mon Jul 15 20:53:08 2024 00:17:05.020 read: IOPS=4030, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1014msec) 00:17:05.020 slat (nsec): min=910, max=30312k, avg=116873.54, stdev=1149331.72 00:17:05.020 clat (usec): min=1461, max=118427, avg=14159.57, stdev=13654.37 00:17:05.020 lat (usec): min=1489, max=118435, avg=14276.45, stdev=13794.60 00:17:05.020 clat percentiles (msec): 
00:17:05.020 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:17:05.020 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:17:05.020 | 70.00th=[ 11], 80.00th=[ 18], 90.00th=[ 36], 95.00th=[ 41], 00:17:05.020 | 99.00th=[ 63], 99.50th=[ 104], 99.90th=[ 118], 99.95th=[ 118], 00:17:05.020 | 99.99th=[ 118] 00:17:05.020 write: IOPS=4280, BW=16.7MiB/s (17.5MB/s)(17.0MiB/1014msec); 0 zone resets 00:17:05.020 slat (nsec): min=1525, max=13578k, avg=102775.33, stdev=597548.17 00:17:05.020 clat (usec): min=582, max=120663, avg=16103.42, stdev=16575.84 00:17:05.020 lat (usec): min=997, max=120669, avg=16206.19, stdev=16580.95 00:17:05.020 clat percentiles (usec): 00:17:05.020 | 1.00th=[ 1450], 5.00th=[ 4080], 10.00th=[ 6259], 20.00th=[ 7308], 00:17:05.020 | 30.00th=[ 8717], 40.00th=[ 10159], 50.00th=[ 11863], 60.00th=[ 13566], 00:17:05.020 | 70.00th=[ 15401], 80.00th=[ 19268], 90.00th=[ 30802], 95.00th=[ 45876], 00:17:05.020 | 99.00th=[117965], 99.50th=[121111], 99.90th=[121111], 99.95th=[121111], 00:17:05.020 | 99.99th=[121111] 00:17:05.020 bw ( KiB/s): min=12248, max=21640, per=22.71%, avg=16944.00, stdev=6641.15, samples=2 00:17:05.020 iops : min= 3062, max= 5410, avg=4236.00, stdev=1660.29, samples=2 00:17:05.020 lat (usec) : 750=0.02%, 1000=0.01% 00:17:05.020 lat (msec) : 2=1.42%, 4=2.58%, 10=47.09%, 20=31.29%, 50=14.90% 00:17:05.020 lat (msec) : 100=1.64%, 250=1.04% 00:17:05.020 cpu : usr=2.86%, sys=3.55%, ctx=652, majf=0, minf=1 00:17:05.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:05.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:05.020 issued rwts: total=4087,4340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:05.020 job2: (groupid=0, jobs=1): err= 0: pid=1566049: Mon Jul 15 20:53:08 2024 00:17:05.020 read: IOPS=4049, BW=15.8MiB/s (16.6MB/s)(16.6MiB/1048msec) 00:17:05.020 slat (nsec): min=924, max=28051k, avg=90282.11, stdev=798462.90 00:17:05.020 clat (usec): min=1595, max=66611, avg=14602.87, stdev=11342.79 00:17:05.020 lat (usec): min=1606, max=68683, avg=14693.15, stdev=11384.54 00:17:05.020 clat percentiles (usec): 00:17:05.020 | 1.00th=[ 2311], 5.00th=[ 3261], 10.00th=[ 5932], 20.00th=[ 8455], 00:17:05.020 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[11338], 60.00th=[13173], 00:17:05.020 | 70.00th=[14877], 80.00th=[18220], 90.00th=[24511], 95.00th=[34866], 00:17:05.020 | 99.00th=[66323], 99.50th=[66323], 99.90th=[66323], 99.95th=[66323], 00:17:05.020 | 99.99th=[66847] 00:17:05.020 write: IOPS=4396, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1048msec); 0 zone resets 00:17:05.020 slat (nsec): min=1558, max=17979k, avg=120733.03, stdev=873852.44 00:17:05.020 clat (usec): min=746, max=140458, avg=15399.77, stdev=21221.59 00:17:05.020 lat (usec): min=1243, max=140469, avg=15520.51, stdev=21362.43 00:17:05.020 clat percentiles (usec): 00:17:05.020 | 1.00th=[ 1647], 5.00th=[ 3064], 10.00th=[ 3884], 20.00th=[ 6390], 00:17:05.020 | 30.00th=[ 8029], 40.00th=[ 8848], 50.00th=[ 10290], 60.00th=[ 11338], 00:17:05.020 | 70.00th=[ 13304], 80.00th=[ 16581], 90.00th=[ 20841], 95.00th=[ 56886], 00:17:05.020 | 99.00th=[124257], 99.50th=[129500], 99.90th=[139461], 99.95th=[139461], 00:17:05.020 | 99.99th=[139461] 00:17:05.020 bw ( KiB/s): min=12280, max=24576, per=24.70%, avg=18428.00, stdev=8694.58, samples=2 00:17:05.020 iops : min= 3070, max= 6144, avg=4607.00, 
stdev=2173.65, samples=2 00:17:05.020 lat (usec) : 750=0.01% 00:17:05.020 lat (msec) : 2=1.46%, 4=7.46%, 10=31.65%, 20=45.11%, 50=9.91% 00:17:05.020 lat (msec) : 100=3.06%, 250=1.34% 00:17:05.020 cpu : usr=2.67%, sys=4.87%, ctx=358, majf=0, minf=1 00:17:05.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:05.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:05.020 issued rwts: total=4244,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:05.020 job3: (groupid=0, jobs=1): err= 0: pid=1566050: Mon Jul 15 20:53:08 2024 00:17:05.020 read: IOPS=4760, BW=18.6MiB/s (19.5MB/s)(18.8MiB/1009msec) 00:17:05.020 slat (nsec): min=962, max=17630k, avg=98524.87, stdev=860390.47 00:17:05.020 clat (usec): min=2851, max=38990, avg=14103.84, stdev=5637.08 00:17:05.020 lat (usec): min=2873, max=45743, avg=14202.36, stdev=5718.75 00:17:05.020 clat percentiles (usec): 00:17:05.020 | 1.00th=[ 3392], 5.00th=[ 5211], 10.00th=[ 7177], 20.00th=[ 9634], 00:17:05.020 | 30.00th=[11076], 40.00th=[12649], 50.00th=[13435], 60.00th=[14746], 00:17:05.020 | 70.00th=[16712], 80.00th=[19268], 90.00th=[21627], 95.00th=[23725], 00:17:05.020 | 99.00th=[29492], 99.50th=[29754], 99.90th=[33817], 99.95th=[38536], 00:17:05.020 | 99.99th=[39060] 00:17:05.020 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:17:05.020 slat (nsec): min=1566, max=20849k, avg=90219.96, stdev=762542.00 00:17:05.020 clat (usec): min=996, max=30845, avg=11767.55, stdev=5387.37 00:17:05.020 lat (usec): min=1012, max=30854, avg=11857.77, stdev=5413.11 00:17:05.020 clat percentiles (usec): 00:17:05.020 | 1.00th=[ 2147], 5.00th=[ 3556], 10.00th=[ 5866], 20.00th=[ 7570], 00:17:05.020 | 30.00th=[ 8291], 40.00th=[ 9634], 50.00th=[10552], 60.00th=[12256], 00:17:05.020 | 70.00th=[14222], 80.00th=[16319], 90.00th=[19006], 95.00th=[22676], 00:17:05.020 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26084], 99.95th=[29492], 00:17:05.020 | 99.99th=[30802] 00:17:05.020 bw ( KiB/s): min=16920, max=24040, per=27.45%, avg=20480.00, stdev=5034.60, samples=2 00:17:05.020 iops : min= 4230, max= 6010, avg=5120.00, stdev=1258.65, samples=2 00:17:05.020 lat (usec) : 1000=0.02% 00:17:05.020 lat (msec) : 2=0.31%, 4=4.01%, 10=30.77%, 20=52.00%, 50=12.89% 00:17:05.020 cpu : usr=4.46%, sys=5.26%, ctx=255, majf=0, minf=1 00:17:05.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:05.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:05.020 issued rwts: total=4803,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:05.020 00:17:05.020 Run status group 0 (all jobs): 00:17:05.020 READ: bw=68.0MiB/s (71.3MB/s), 15.7MiB/s-20.0MiB/s (16.5MB/s-20.9MB/s), io=71.3MiB (74.8MB), run=1002-1048msec 00:17:05.020 WRITE: bw=72.9MiB/s (76.4MB/s), 16.7MiB/s-21.4MiB/s (17.5MB/s-22.4MB/s), io=76.3MiB (80.1MB), run=1002-1048msec 00:17:05.020 00:17:05.020 Disk stats (read/write): 00:17:05.020 nvme0n1: ios=3635/3721, merge=0/0, ticks=38273/48833, in_queue=87106, util=95.69% 00:17:05.020 nvme0n2: ios=3099/3276, merge=0/0, ticks=26053/13977, in_queue=40030, util=96.57% 00:17:05.020 nvme0n3: ios=4283/4608, merge=0/0, ticks=41977/62339, in_queue=104316, 
util=96.69% 00:17:05.020 nvme0n4: ios=3731/4096, merge=0/0, ticks=47245/41520, in_queue=88765, util=100.00% 00:17:05.020 20:53:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:05.020 20:53:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1566378 00:17:05.020 20:53:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:05.020 20:53:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:05.020 [global] 00:17:05.020 thread=1 00:17:05.020 invalidate=1 00:17:05.020 rw=read 00:17:05.020 time_based=1 00:17:05.020 runtime=10 00:17:05.020 ioengine=libaio 00:17:05.020 direct=1 00:17:05.020 bs=4096 00:17:05.020 iodepth=1 00:17:05.020 norandommap=1 00:17:05.020 numjobs=1 00:17:05.020 00:17:05.020 [job0] 00:17:05.020 filename=/dev/nvme0n1 00:17:05.020 [job1] 00:17:05.020 filename=/dev/nvme0n2 00:17:05.020 [job2] 00:17:05.020 filename=/dev/nvme0n3 00:17:05.020 [job3] 00:17:05.020 filename=/dev/nvme0n4 00:17:05.020 Could not set queue depth (nvme0n1) 00:17:05.020 Could not set queue depth (nvme0n2) 00:17:05.020 Could not set queue depth (nvme0n3) 00:17:05.020 Could not set queue depth (nvme0n4) 00:17:05.283 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.283 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.283 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.283 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.283 fio-3.35 00:17:05.283 Starting 4 threads 00:17:07.848 20:53:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:08.107 20:53:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:08.107 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=8155136, buflen=4096 00:17:08.107 fio: pid=1566575, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:08.107 20:53:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:08.107 20:53:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:08.107 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1839104, buflen=4096 00:17:08.107 fio: pid=1566574, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:08.366 20:53:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:08.366 20:53:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:08.366 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=290816, buflen=4096 00:17:08.366 fio: pid=1566572, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:08.625 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=11042816, buflen=4096 00:17:08.625 fio: pid=1566573, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:08.625 20:53:12 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:08.625 20:53:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:08.625 00:17:08.625 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1566572: Mon Jul 15 20:53:12 2024 00:17:08.625 read: IOPS=24, BW=95.3KiB/s (97.6kB/s)(284KiB/2981msec) 00:17:08.625 slat (usec): min=24, max=11618, avg=237.11, stdev=1426.24 00:17:08.625 clat (usec): min=1045, max=43038, avg=41443.45, stdev=4875.86 00:17:08.625 lat (usec): min=1084, max=52949, avg=41683.53, stdev=5087.91 00:17:08.625 clat percentiles (usec): 00:17:08.625 | 1.00th=[ 1045], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:17:08.625 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:08.625 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:17:08.625 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:08.625 | 99.99th=[43254] 00:17:08.625 bw ( KiB/s): min= 96, max= 96, per=1.44%, avg=96.00, stdev= 0.00, samples=5 00:17:08.625 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:17:08.625 lat (msec) : 2=1.39%, 50=97.22% 00:17:08.625 cpu : usr=0.10%, sys=0.00%, ctx=74, majf=0, minf=1 00:17:08.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:08.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.625 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.625 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:08.625 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1566573: Mon Jul 15 20:53:12 2024 00:17:08.625 read: IOPS=863, BW=3451KiB/s (3534kB/s)(10.5MiB/3125msec) 00:17:08.625 slat (usec): min=6, max=12780, avg=47.08, stdev=477.55 00:17:08.625 clat (usec): min=507, max=3693, avg=1097.55, stdev=127.47 00:17:08.625 lat (usec): min=532, max=13943, avg=1144.64, stdev=497.14 00:17:08.625 clat percentiles (usec): 00:17:08.625 | 1.00th=[ 832], 5.00th=[ 906], 10.00th=[ 947], 20.00th=[ 979], 00:17:08.625 | 30.00th=[ 1012], 40.00th=[ 1090], 50.00th=[ 1139], 60.00th=[ 1156], 00:17:08.625 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1237], 95.00th=[ 1254], 00:17:08.625 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[ 1401], 99.95th=[ 1450], 00:17:08.625 | 99.99th=[ 3687] 00:17:08.625 bw ( KiB/s): min= 3022, max= 4024, per=52.41%, avg=3493.00, stdev=375.07, samples=6 00:17:08.625 iops : min= 755, max= 1006, avg=873.17, stdev=93.89, samples=6 00:17:08.625 lat (usec) : 750=0.52%, 1000=27.07% 00:17:08.625 lat (msec) : 2=72.34%, 4=0.04% 00:17:08.625 cpu : usr=1.15%, sys=3.30%, ctx=2703, majf=0, minf=1 00:17:08.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:08.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.625 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.625 issued rwts: total=2697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:08.625 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1566574: Mon Jul 15 20:53:12 2024 00:17:08.625 read: IOPS=160, BW=639KiB/s (654kB/s)(1796KiB/2810msec) 
00:17:08.625 slat (nsec): min=7162, max=64877, avg=26844.70, stdev=4360.01 00:17:08.625 clat (usec): min=935, max=43019, avg=6173.06, stdev=13190.55 00:17:08.625 lat (usec): min=974, max=43045, avg=6199.90, stdev=13190.31 00:17:08.625 clat percentiles (usec): 00:17:08.625 | 1.00th=[ 1106], 5.00th=[ 1205], 10.00th=[ 1237], 20.00th=[ 1287], 00:17:08.625 | 30.00th=[ 1303], 40.00th=[ 1319], 50.00th=[ 1336], 60.00th=[ 1369], 00:17:08.625 | 70.00th=[ 1385], 80.00th=[ 1418], 90.00th=[41681], 95.00th=[42206], 00:17:08.625 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:17:08.625 | 99.99th=[43254] 00:17:08.625 bw ( KiB/s): min= 96, max= 2872, per=10.61%, avg=707.20, stdev=1216.22, samples=5 00:17:08.625 iops : min= 24, max= 718, avg=176.80, stdev=304.05, samples=5 00:17:08.625 lat (usec) : 1000=0.22% 00:17:08.625 lat (msec) : 2=87.33%, 4=0.22%, 20=0.22%, 50=11.78% 00:17:08.625 cpu : usr=0.21%, sys=0.68%, ctx=452, majf=0, minf=1 00:17:08.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:08.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.625 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.625 issued rwts: total=450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:08.625 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1566575: Mon Jul 15 20:53:12 2024 00:17:08.625 read: IOPS=755, BW=3019KiB/s (3091kB/s)(7964KiB/2638msec) 00:17:08.625 slat (nsec): min=6866, max=61660, avg=26079.13, stdev=4172.17 00:17:08.625 clat (usec): min=369, max=43056, avg=1281.46, stdev=3570.81 00:17:08.625 lat (usec): min=380, max=43080, avg=1307.54, stdev=3570.66 00:17:08.625 clat percentiles (usec): 00:17:08.625 | 1.00th=[ 758], 5.00th=[ 906], 10.00th=[ 922], 20.00th=[ 938], 00:17:08.625 | 30.00th=[ 955], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 979], 00:17:08.625 | 70.00th=[ 996], 80.00th=[ 1004], 90.00th=[ 1020], 95.00th=[ 1045], 00:17:08.625 | 99.00th=[ 1369], 99.50th=[42206], 99.90th=[42730], 99.95th=[43254], 00:17:08.625 | 99.99th=[43254] 00:17:08.625 bw ( KiB/s): min= 192, max= 4056, per=47.71%, avg=3180.80, stdev=1677.17, samples=5 00:17:08.625 iops : min= 48, max= 1014, avg=795.20, stdev=419.29, samples=5 00:17:08.625 lat (usec) : 500=0.35%, 750=0.55%, 1000=75.65% 00:17:08.625 lat (msec) : 2=22.64%, 50=0.75% 00:17:08.626 cpu : usr=0.64%, sys=2.50%, ctx=1992, majf=0, minf=2 00:17:08.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:08.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.626 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.626 issued rwts: total=1992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:08.626 00:17:08.626 Run status group 0 (all jobs): 00:17:08.626 READ: bw=6665KiB/s (6825kB/s), 95.3KiB/s-3451KiB/s (97.6kB/s-3534kB/s), io=20.3MiB (21.3MB), run=2638-3125msec 00:17:08.626 00:17:08.626 Disk stats (read/write): 00:17:08.626 nvme0n1: ios=68/0, merge=0/0, ticks=2818/0, in_queue=2818, util=94.46% 00:17:08.626 nvme0n2: ios=2684/0, merge=0/0, ticks=2695/0, in_queue=2695, util=94.02% 00:17:08.626 nvme0n3: ios=482/0, merge=0/0, ticks=3233/0, in_queue=3233, util=99.85% 00:17:08.626 nvme0n4: ios=1990/0, merge=0/0, ticks=2182/0, in_queue=2182, util=96.46% 00:17:08.626 20:53:12 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:08.626 20:53:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:08.884 20:53:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:08.884 20:53:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:08.884 20:53:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:08.884 20:53:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:09.142 20:53:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:09.142 20:53:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:09.402 20:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:09.402 20:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1566378 00:17:09.402 20:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:09.402 20:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:09.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.402 20:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:09.402 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:09.402 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:09.402 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.402 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:09.402 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.402 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:09.402 20:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:09.402 20:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:09.402 nvmf hotplug test: fio failed as expected 00:17:09.402 20:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:09.661 20:53:13 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:09.661 rmmod nvme_tcp 00:17:09.661 rmmod nvme_fabrics 00:17:09.661 rmmod nvme_keyring 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1562877 ']' 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1562877 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1562877 ']' 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1562877 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1562877 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1562877' 00:17:09.661 killing process with pid 1562877 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1562877 00:17:09.661 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1562877 00:17:09.920 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:09.920 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:09.920 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:09.920 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:09.920 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:09.920 20:53:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.920 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.920 20:53:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.465 20:53:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:12.465 00:17:12.465 real 0m28.244s 00:17:12.465 user 2m36.064s 00:17:12.465 sys 0m8.961s 00:17:12.465 20:53:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:12.465 20:53:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.465 ************************************ 00:17:12.465 END TEST nvmf_fio_target 00:17:12.465 ************************************ 00:17:12.465 20:53:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:12.465 20:53:15 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:12.465 20:53:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:12.465 20:53:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:12.465 20:53:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:12.465 ************************************ 00:17:12.465 START TEST nvmf_bdevio 00:17:12.465 ************************************ 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:12.465 * Looking for test storage... 00:17:12.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:12.465 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:12.466 20:53:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:19.115 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:19.115 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:19.115 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:19.115 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:19.115 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.116 20:53:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:19.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:19.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:17:19.377 00:17:19.377 --- 10.0.0.2 ping statistics --- 00:17:19.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.377 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:19.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:17:19.377 00:17:19.377 --- 10.0.0.1 ping statistics --- 00:17:19.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.377 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1571588 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1571588 00:17:19.377 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:19.378 20:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1571588 ']' 00:17:19.378 20:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.378 20:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.378 20:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.378 20:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.378 20:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:19.378 [2024-07-15 20:53:23.120551] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:17:19.378 [2024-07-15 20:53:23.120600] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.378 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.378 [2024-07-15 20:53:23.203846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:19.378 [2024-07-15 20:53:23.268675] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.378 [2024-07-15 20:53:23.268712] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.378 [2024-07-15 20:53:23.268719] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.378 [2024-07-15 20:53:23.268726] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.378 [2024-07-15 20:53:23.268732] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.378 [2024-07-15 20:53:23.268874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:19.378 [2024-07-15 20:53:23.268998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:19.378 [2024-07-15 20:53:23.269040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:19.378 [2024-07-15 20:53:23.269040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:20.313 20:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.313 20:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:20.313 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:20.313 20:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:20.313 20:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:20.313 20:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.313 20:53:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:20.313 20:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.313 20:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:20.313 [2024-07-15 20:53:24.000313] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:20.313 Malloc0 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- 
target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:20.313 [2024-07-15 20:53:24.065931] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:20.313 { 00:17:20.313 "params": { 00:17:20.313 "name": "Nvme$subsystem", 00:17:20.313 "trtype": "$TEST_TRANSPORT", 00:17:20.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:20.313 "adrfam": "ipv4", 00:17:20.313 "trsvcid": "$NVMF_PORT", 00:17:20.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:20.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:20.313 "hdgst": ${hdgst:-false}, 00:17:20.313 "ddgst": ${ddgst:-false} 00:17:20.313 }, 00:17:20.313 "method": "bdev_nvme_attach_controller" 00:17:20.313 } 00:17:20.313 EOF 00:17:20.313 )") 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:20.313 20:53:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:20.313 "params": { 00:17:20.313 "name": "Nvme1", 00:17:20.313 "trtype": "tcp", 00:17:20.313 "traddr": "10.0.0.2", 00:17:20.313 "adrfam": "ipv4", 00:17:20.313 "trsvcid": "4420", 00:17:20.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:20.313 "hdgst": false, 00:17:20.313 "ddgst": false 00:17:20.313 }, 00:17:20.313 "method": "bdev_nvme_attach_controller" 00:17:20.313 }' 00:17:20.313 [2024-07-15 20:53:24.123813] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:17:20.313 [2024-07-15 20:53:24.123876] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571864 ] 00:17:20.313 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.313 [2024-07-15 20:53:24.188489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:20.572 [2024-07-15 20:53:24.264202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.572 [2024-07-15 20:53:24.264360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.572 [2024-07-15 20:53:24.264363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.572 I/O targets: 00:17:20.572 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:20.572 00:17:20.572 00:17:20.572 CUnit - A unit testing framework for C - Version 2.1-3 00:17:20.572 http://cunit.sourceforge.net/ 00:17:20.572 00:17:20.572 00:17:20.572 Suite: bdevio tests on: Nvme1n1 00:17:20.829 Test: blockdev write read block ...passed 00:17:20.829 Test: blockdev write zeroes read block ...passed 00:17:20.830 Test: blockdev write zeroes read no split ...passed 00:17:20.830 Test: blockdev write zeroes read split ...passed 00:17:20.830 Test: blockdev write zeroes read split partial ...passed 00:17:20.830 Test: blockdev reset ...[2024-07-15 20:53:24.669558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:20.830 [2024-07-15 20:53:24.669626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1427ce0 (9): Bad file descriptor 00:17:20.830 [2024-07-15 20:53:24.686134] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:20.830 passed 00:17:20.830 Test: blockdev write read 8 blocks ...passed 00:17:20.830 Test: blockdev write read size > 128k ...passed 00:17:20.830 Test: blockdev write read invalid size ...passed 00:17:21.088 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.088 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.088 Test: blockdev write read max offset ...passed 00:17:21.088 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:21.088 Test: blockdev writev readv 8 blocks ...passed 00:17:21.088 Test: blockdev writev readv 30 x 1block ...passed 00:17:21.088 Test: blockdev writev readv block ...passed 00:17:21.088 Test: blockdev writev readv size > 128k ...passed 00:17:21.088 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:21.088 Test: blockdev comparev and writev ...[2024-07-15 20:53:24.912649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.088 [2024-07-15 20:53:24.912674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.088 [2024-07-15 20:53:24.912685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.088 [2024-07-15 20:53:24.912691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:21.088 [2024-07-15 20:53:24.913264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.088 [2024-07-15 20:53:24.913272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:21.088 [2024-07-15 20:53:24.913281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.088 [2024-07-15 20:53:24.913286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:21.088 [2024-07-15 20:53:24.913895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.088 [2024-07-15 20:53:24.913901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:21.088 [2024-07-15 20:53:24.913910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.088 [2024-07-15 20:53:24.913915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:21.088 [2024-07-15 20:53:24.914509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.088 [2024-07-15 20:53:24.914516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:21.088 [2024-07-15 20:53:24.914525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:21.088 [2024-07-15 20:53:24.914530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:21.088 passed 00:17:21.347 Test: blockdev nvme passthru rw ...passed 00:17:21.347 Test: blockdev nvme passthru vendor specific ...[2024-07-15 20:53:25.000175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:21.347 [2024-07-15 20:53:25.000185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:21.347 [2024-07-15 20:53:25.000669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:21.347 [2024-07-15 20:53:25.000675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:21.347 [2024-07-15 20:53:25.001188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:21.347 [2024-07-15 20:53:25.001195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:21.347 [2024-07-15 20:53:25.001666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:21.348 [2024-07-15 20:53:25.001677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:21.348 passed 00:17:21.348 Test: blockdev nvme admin passthru ...passed 00:17:21.348 Test: blockdev copy ...passed 00:17:21.348 00:17:21.348 Run Summary: Type Total Ran Passed Failed Inactive 00:17:21.348 suites 1 1 n/a 0 0 00:17:21.348 tests 23 23 23 0 0 00:17:21.348 asserts 152 152 152 0 n/a 00:17:21.348 00:17:21.348 Elapsed time = 1.229 seconds 00:17:21.348 20:53:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.348 20:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.348 20:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:21.348 20:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.348 20:53:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:21.348 20:53:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:21.348 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:21.348 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:21.348 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:21.348 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:21.348 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:21.348 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:21.348 rmmod nvme_tcp 00:17:21.348 rmmod nvme_fabrics 00:17:21.348 rmmod nvme_keyring 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1571588 ']' 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1571588 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1571588 ']' 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1571588 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1571588 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1571588' 00:17:21.608 killing process with pid 1571588 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1571588 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1571588 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:21.608 20:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.609 20:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.609 20:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.148 20:53:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:24.148 00:17:24.148 real 0m11.725s 00:17:24.148 user 0m12.950s 00:17:24.148 sys 0m5.843s 00:17:24.148 20:53:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:24.148 20:53:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:24.148 ************************************ 00:17:24.148 END TEST nvmf_bdevio 00:17:24.148 ************************************ 00:17:24.148 20:53:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:24.148 20:53:27 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:24.148 20:53:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:24.148 20:53:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.148 20:53:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:24.148 ************************************ 00:17:24.148 START TEST nvmf_auth_target 00:17:24.148 ************************************ 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:24.148 * Looking for test storage... 
00:17:24.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:24.148 20:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.730 20:53:34 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:30.730 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:30.730 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:17:30.730 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:30.730 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:30.730 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:30.731 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:30.731 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:30.731 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.731 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.731 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.731 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:30.731 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.731 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.731 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:30.731 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.731 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.731 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:30.731 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:30.731 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.731 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:30.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:17:30.990 00:17:30.990 --- 10.0.0.2 ping statistics --- 00:17:30.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.990 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:17:30.990 00:17:30.990 --- 10.0.0.1 ping statistics --- 00:17:30.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.990 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1576268 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1576268 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1576268 ']' 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
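For reference, the nvmf_tcp_init sequence traced above reduces to a simple back-to-back topology: the first e810 port is moved into a private network namespace and acts as the NVMe/TCP target side, while the second port stays in the default namespace as the initiator side. The following is a minimal re-creation of those steps, assuming the interface names cvl_0_0/cvl_0_1 and the default 10.0.0.0/24 addressing seen in this run (root required):

# Target port cvl_0_0 lives in its own network namespace; the initiator stays
# in the default namespace on the peer port cvl_0_1.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# 10.0.0.1 = initiator side, 10.0.0.2 = target side (inside the namespace).
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP listener port on the initiator-facing interface and
# sanity-check reachability in both directions, as the log does with ping.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

All target-side RPCs and the nvmf_tgt process itself are then wrapped in "ip netns exec cvl_0_0_ns_spdk" (NVMF_TARGET_NS_CMD), which is why the listener at 10.0.0.2:4420 is only reachable from cvl_0_1 in the default namespace.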
00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.990 20:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1576306 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=28322e23407815470e769ceafac82aed061c9f26c8b28c95 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Am8 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 28322e23407815470e769ceafac82aed061c9f26c8b28c95 0 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 28322e23407815470e769ceafac82aed061c9f26c8b28c95 0 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=28322e23407815470e769ceafac82aed061c9f26c8b28c95 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Am8 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Am8 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Am8 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9582e831122eb8382c35948513ce9cb7b0dc25d20e139f563a419f961b00b86b 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.r8o 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9582e831122eb8382c35948513ce9cb7b0dc25d20e139f563a419f961b00b86b 3 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9582e831122eb8382c35948513ce9cb7b0dc25d20e139f563a419f961b00b86b 3 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9582e831122eb8382c35948513ce9cb7b0dc25d20e139f563a419f961b00b86b 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:31.931 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:32.190 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.r8o 00:17:32.190 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.r8o 00:17:32.190 20:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.r8o 00:17:32.190 20:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:32.190 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.190 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.190 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:32.190 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:32.190 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:32.190 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:32.190 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=963b210337f26afd9fce1a5d6178589e 00:17:32.190 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:32.190 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Vzc 00:17:32.190 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 963b210337f26afd9fce1a5d6178589e 1 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 963b210337f26afd9fce1a5d6178589e 1 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=963b210337f26afd9fce1a5d6178589e 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Vzc 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Vzc 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Vzc 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d357def22a9f6e5a7899c1bdc19bb1de2d7406cffc9908fe 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.8in 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d357def22a9f6e5a7899c1bdc19bb1de2d7406cffc9908fe 2 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d357def22a9f6e5a7899c1bdc19bb1de2d7406cffc9908fe 2 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d357def22a9f6e5a7899c1bdc19bb1de2d7406cffc9908fe 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.8in 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.8in 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.8in 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e7c6b12afdfe22efe0489c8f1ac9b1bbeb345fb0f97e38d7 00:17:32.191 
20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Rbr 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e7c6b12afdfe22efe0489c8f1ac9b1bbeb345fb0f97e38d7 2 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e7c6b12afdfe22efe0489c8f1ac9b1bbeb345fb0f97e38d7 2 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e7c6b12afdfe22efe0489c8f1ac9b1bbeb345fb0f97e38d7 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:32.191 20:53:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Rbr 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Rbr 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Rbr 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b7aaa667f611e602f79436664225644d 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MN6 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b7aaa667f611e602f79436664225644d 1 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b7aaa667f611e602f79436664225644d 1 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b7aaa667f611e602f79436664225644d 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:32.191 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MN6 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MN6 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.MN6 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=aa75f6e396258f3cba13cc181f3fd6788ce5e484e0f4cbdc07bf50453e34a660 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.yOz 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key aa75f6e396258f3cba13cc181f3fd6788ce5e484e0f4cbdc07bf50453e34a660 3 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 aa75f6e396258f3cba13cc181f3fd6788ce5e484e0f4cbdc07bf50453e34a660 3 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=aa75f6e396258f3cba13cc181f3fd6788ce5e484e0f4cbdc07bf50453e34a660 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.yOz 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.yOz 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.yOz 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1576268 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1576268 ']' 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
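Each gen_dhchap_key call above draws len/2 random bytes with xxd, keeps the hex string as the secret, and wraps it into a DHHC-1 key string via the inline python step before writing it to a /tmp/spdk.key-* file. Below is a hedged, stand-alone sketch of that formatting; it assumes the four bytes appended before base64-encoding are the little-endian CRC-32 of the ASCII secret (the NVMe DH-HMAC-CHAP secret representation), and that the second field is the hash index from the digests table above (0=null, 1=sha256, 2=sha384, 3=sha512):

# Hypothetical re-creation of gen_dhchap_key/format_dhchap_key as traced above.
# Assumption: key string is the ASCII hex secret, with its little-endian CRC-32
# appended before base64-encoding.
gen_dhchap_key() {
    local digest=$1 len=$2 key
    # e.g. len=48 -> 24 random bytes, hex-encoded, as in
    # "xxd -p -c0 -l 24 /dev/urandom" in the trace
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    KEY="$key" DIGEST="$digest" python3 - <<'PY'
import base64, os, zlib
key = os.environ["KEY"].encode()
digest = int(os.environ["DIGEST"])
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY
}

# keys[0] above is a 48-character secret with the "null" digest index 0:
gen_dhchap_key 0 48    # -> DHHC-1:00:<base64(secret + crc)>:

Strings in this DHHC-1 form are what later appear verbatim on the nvme connect command line as --dhchap-secret and --dhchap-ctrl-secret for each key/ckey pair.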
00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1576306 /var/tmp/host.sock 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1576306 ']' 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:32.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.451 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.710 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.710 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:32.710 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:32.710 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.710 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.710 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.710 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:32.710 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Am8 00:17:32.710 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.710 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.710 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.710 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Am8 00:17:32.710 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Am8 00:17:32.969 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.r8o ]] 00:17:32.969 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.r8o 00:17:32.969 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.969 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.969 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.969 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.r8o 00:17:32.969 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.r8o 00:17:32.969 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:32.969 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Vzc 00:17:32.969 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.969 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.969 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.969 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Vzc 00:17:32.969 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Vzc 00:17:33.227 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.8in ]] 00:17:33.227 20:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8in 00:17:33.228 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.228 20:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.228 20:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.228 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8in 00:17:33.228 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8in 00:17:33.486 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:33.486 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Rbr 00:17:33.486 20:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.486 20:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.486 20:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.486 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Rbr 00:17:33.486 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Rbr 00:17:33.486 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.MN6 ]] 00:17:33.486 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MN6 00:17:33.486 20:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.486 20:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.486 20:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.486 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MN6 00:17:33.486 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.MN6 00:17:33.744 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:33.744 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.yOz 00:17:33.744 20:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.744 20:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.744 20:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.744 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.yOz 00:17:33.744 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.yOz 00:17:33.744 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:33.744 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:33.744 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.744 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.744 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:33.744 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:34.003 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:34.003 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.003 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:34.003 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:34.003 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:34.003 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.003 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.003 20:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.003 20:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.003 20:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.003 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.003 20:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.261 00:17:34.261 20:53:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.261 20:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.261 20:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.520 20:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.520 20:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.520 20:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.520 20:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.520 20:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.520 20:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.520 { 00:17:34.520 "cntlid": 1, 00:17:34.520 "qid": 0, 00:17:34.520 "state": "enabled", 00:17:34.520 "thread": "nvmf_tgt_poll_group_000", 00:17:34.520 "listen_address": { 00:17:34.520 "trtype": "TCP", 00:17:34.520 "adrfam": "IPv4", 00:17:34.520 "traddr": "10.0.0.2", 00:17:34.520 "trsvcid": "4420" 00:17:34.520 }, 00:17:34.520 "peer_address": { 00:17:34.520 "trtype": "TCP", 00:17:34.520 "adrfam": "IPv4", 00:17:34.520 "traddr": "10.0.0.1", 00:17:34.520 "trsvcid": "55342" 00:17:34.520 }, 00:17:34.520 "auth": { 00:17:34.520 "state": "completed", 00:17:34.520 "digest": "sha256", 00:17:34.520 "dhgroup": "null" 00:17:34.520 } 00:17:34.520 } 00:17:34.520 ]' 00:17:34.520 20:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.520 20:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.520 20:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.520 20:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:34.520 20:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.520 20:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.520 20:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.520 20:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.779 20:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.714 20:53:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.714 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.973 00:17:35.973 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.973 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.973 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.973 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.973 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.973 20:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.973 20:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.973 20:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.973 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.973 { 00:17:35.973 "cntlid": 3, 00:17:35.973 "qid": 0, 00:17:35.973 
"state": "enabled", 00:17:35.973 "thread": "nvmf_tgt_poll_group_000", 00:17:35.973 "listen_address": { 00:17:35.973 "trtype": "TCP", 00:17:35.973 "adrfam": "IPv4", 00:17:35.973 "traddr": "10.0.0.2", 00:17:35.973 "trsvcid": "4420" 00:17:35.973 }, 00:17:35.973 "peer_address": { 00:17:35.973 "trtype": "TCP", 00:17:35.973 "adrfam": "IPv4", 00:17:35.973 "traddr": "10.0.0.1", 00:17:35.973 "trsvcid": "55366" 00:17:35.973 }, 00:17:35.973 "auth": { 00:17:35.973 "state": "completed", 00:17:35.973 "digest": "sha256", 00:17:35.973 "dhgroup": "null" 00:17:35.973 } 00:17:35.973 } 00:17:35.973 ]' 00:17:35.973 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.973 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.973 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.231 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:36.231 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.231 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.231 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.231 20:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.231 20:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:17:37.205 20:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.205 20:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.205 20:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.205 20:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.205 20:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.205 20:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.205 20:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:37.205 20:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:37.464 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:37.464 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.464 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:37.464 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:37.464 20:53:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:37.464 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.464 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.464 20:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.464 20:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.464 20:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.464 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.464 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.723 00:17:37.723 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.723 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.723 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.723 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.723 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.723 20:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.723 20:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.723 20:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.723 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.723 { 00:17:37.723 "cntlid": 5, 00:17:37.723 "qid": 0, 00:17:37.723 "state": "enabled", 00:17:37.723 "thread": "nvmf_tgt_poll_group_000", 00:17:37.723 "listen_address": { 00:17:37.723 "trtype": "TCP", 00:17:37.723 "adrfam": "IPv4", 00:17:37.723 "traddr": "10.0.0.2", 00:17:37.723 "trsvcid": "4420" 00:17:37.723 }, 00:17:37.723 "peer_address": { 00:17:37.723 "trtype": "TCP", 00:17:37.723 "adrfam": "IPv4", 00:17:37.723 "traddr": "10.0.0.1", 00:17:37.723 "trsvcid": "55400" 00:17:37.723 }, 00:17:37.723 "auth": { 00:17:37.723 "state": "completed", 00:17:37.723 "digest": "sha256", 00:17:37.723 "dhgroup": "null" 00:17:37.723 } 00:17:37.723 } 00:17:37.723 ]' 00:17:37.723 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.723 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.723 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.981 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:37.981 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:37.981 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.981 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.981 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.981 20:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.915 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.173 00:17:39.173 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.173 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.173 20:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.431 20:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.431 20:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.431 20:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.431 20:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.431 20:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.431 20:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.431 { 00:17:39.431 "cntlid": 7, 00:17:39.431 "qid": 0, 00:17:39.431 "state": "enabled", 00:17:39.431 "thread": "nvmf_tgt_poll_group_000", 00:17:39.431 "listen_address": { 00:17:39.431 "trtype": "TCP", 00:17:39.431 "adrfam": "IPv4", 00:17:39.431 "traddr": "10.0.0.2", 00:17:39.431 "trsvcid": "4420" 00:17:39.431 }, 00:17:39.431 "peer_address": { 00:17:39.431 "trtype": "TCP", 00:17:39.431 "adrfam": "IPv4", 00:17:39.431 "traddr": "10.0.0.1", 00:17:39.431 "trsvcid": "55430" 00:17:39.431 }, 00:17:39.431 "auth": { 00:17:39.431 "state": "completed", 00:17:39.431 "digest": "sha256", 00:17:39.431 "dhgroup": "null" 00:17:39.431 } 00:17:39.431 } 00:17:39.431 ]' 00:17:39.431 20:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.431 20:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.431 20:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.431 20:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:39.431 20:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.431 20:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.431 20:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.431 20:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.689 20:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.624 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.883 00:17:40.883 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.883 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.883 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.141 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.141 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.141 20:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:17:41.141 20:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.141 20:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.141 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.141 { 00:17:41.141 "cntlid": 9, 00:17:41.141 "qid": 0, 00:17:41.141 "state": "enabled", 00:17:41.141 "thread": "nvmf_tgt_poll_group_000", 00:17:41.141 "listen_address": { 00:17:41.141 "trtype": "TCP", 00:17:41.141 "adrfam": "IPv4", 00:17:41.141 "traddr": "10.0.0.2", 00:17:41.141 "trsvcid": "4420" 00:17:41.141 }, 00:17:41.141 "peer_address": { 00:17:41.141 "trtype": "TCP", 00:17:41.141 "adrfam": "IPv4", 00:17:41.141 "traddr": "10.0.0.1", 00:17:41.141 "trsvcid": "55464" 00:17:41.141 }, 00:17:41.141 "auth": { 00:17:41.141 "state": "completed", 00:17:41.141 "digest": "sha256", 00:17:41.141 "dhgroup": "ffdhe2048" 00:17:41.141 } 00:17:41.141 } 00:17:41.141 ]' 00:17:41.141 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.141 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.141 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.141 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:41.141 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.141 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.141 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.141 20:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.400 20:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:17:41.966 20:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.224 20:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.224 20:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.224 20:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.224 20:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.224 20:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.224 20:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:42.224 20:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:42.224 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:42.224 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.224 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:42.224 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:42.224 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:42.224 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.224 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.224 20:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.224 20:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.224 20:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.224 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.224 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.482 00:17:42.482 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.482 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.482 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.740 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.740 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.740 20:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.740 20:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.740 20:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.740 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.740 { 00:17:42.740 "cntlid": 11, 00:17:42.740 "qid": 0, 00:17:42.740 "state": "enabled", 00:17:42.740 "thread": "nvmf_tgt_poll_group_000", 00:17:42.740 "listen_address": { 00:17:42.740 "trtype": "TCP", 00:17:42.740 "adrfam": "IPv4", 00:17:42.740 "traddr": "10.0.0.2", 00:17:42.740 "trsvcid": "4420" 00:17:42.740 }, 00:17:42.740 "peer_address": { 00:17:42.740 "trtype": "TCP", 00:17:42.740 "adrfam": "IPv4", 00:17:42.740 "traddr": "10.0.0.1", 00:17:42.740 "trsvcid": "55484" 00:17:42.740 }, 00:17:42.740 "auth": { 00:17:42.740 "state": "completed", 00:17:42.740 "digest": "sha256", 00:17:42.740 "dhgroup": "ffdhe2048" 00:17:42.740 } 00:17:42.740 } 00:17:42.740 ]' 00:17:42.740 
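[editorial note] For orientation, the target/host RPC sequence that each connect_authenticate iteration above exercises is sketched below. This is a minimal reconstruction from the log, not the literal test script: it assumes the SPDK nvmf target is already listening on 10.0.0.2:4420, that the host application exposes its RPC socket at /var/tmp/host.sock (the "hostrpc" calls in the log), that the target RPC uses the default socket (the "rpc_cmd" calls), and that the keyN/ckeyN DH-CHAP key names were registered earlier in the test. Placeholder variables (TGT_RPC, HOST_RPC, SUBNQN, HOSTNQN, keyN/ckeyN) are illustrative only.

  # One connect_authenticate pass, e.g. digest=sha256, dhgroup=ffdhe2048, key index N
  TGT_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  HOST_RPC="$TGT_RPC -s /var/tmp/host.sock"
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # 1. Restrict the host-side bdev_nvme module to the digest/dhgroup under test.
  $HOST_RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # 2. On the target, allow the host on the subsystem with the matching DH-CHAP key pair.
  $TGT_RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key keyN --dhchap-ctrlr-key ckeyN

  # 3. Attach a controller through the host RPC socket, authenticating with the same keys.
  $HOST_RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN --dhchap-key keyN --dhchap-ctrlr-key ckeyN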
20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.740 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.740 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.740 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:42.740 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.740 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.740 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.740 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.998 20:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.934 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.193 00:17:44.193 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.193 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.193 20:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.193 20:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.193 20:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.193 20:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.193 20:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.452 20:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.452 20:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.452 { 00:17:44.452 "cntlid": 13, 00:17:44.452 "qid": 0, 00:17:44.452 "state": "enabled", 00:17:44.452 "thread": "nvmf_tgt_poll_group_000", 00:17:44.452 "listen_address": { 00:17:44.452 "trtype": "TCP", 00:17:44.452 "adrfam": "IPv4", 00:17:44.452 "traddr": "10.0.0.2", 00:17:44.452 "trsvcid": "4420" 00:17:44.452 }, 00:17:44.452 "peer_address": { 00:17:44.452 "trtype": "TCP", 00:17:44.452 "adrfam": "IPv4", 00:17:44.452 "traddr": "10.0.0.1", 00:17:44.452 "trsvcid": "41926" 00:17:44.452 }, 00:17:44.452 "auth": { 00:17:44.452 "state": "completed", 00:17:44.452 "digest": "sha256", 00:17:44.452 "dhgroup": "ffdhe2048" 00:17:44.452 } 00:17:44.452 } 00:17:44.452 ]' 00:17:44.452 20:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.452 20:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.452 20:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.452 20:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:44.452 20:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.452 20:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.452 20:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.452 20:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.710 20:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:17:45.276 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.276 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.276 20:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.276 20:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.534 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.792 00:17:45.792 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.792 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.792 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.050 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.050 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.050 20:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.050 20:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.050 20:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.050 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.050 { 00:17:46.050 "cntlid": 15, 00:17:46.050 "qid": 0, 00:17:46.050 "state": "enabled", 00:17:46.050 "thread": "nvmf_tgt_poll_group_000", 00:17:46.050 "listen_address": { 00:17:46.050 "trtype": "TCP", 00:17:46.050 "adrfam": "IPv4", 00:17:46.050 "traddr": "10.0.0.2", 00:17:46.050 "trsvcid": "4420" 00:17:46.050 }, 00:17:46.050 "peer_address": { 00:17:46.050 "trtype": "TCP", 00:17:46.050 "adrfam": "IPv4", 00:17:46.050 "traddr": "10.0.0.1", 00:17:46.050 "trsvcid": "41952" 00:17:46.050 }, 00:17:46.050 "auth": { 00:17:46.050 "state": "completed", 00:17:46.050 "digest": "sha256", 00:17:46.050 "dhgroup": "ffdhe2048" 00:17:46.050 } 00:17:46.050 } 00:17:46.050 ]' 00:17:46.050 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.050 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.050 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.050 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.050 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.050 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.050 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.050 20:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.308 20:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.242 20:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.499 00:17:47.499 20:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.500 20:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.500 20:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.500 20:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.757 20:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.757 20:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.757 20:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.757 20:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.757 20:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.757 { 00:17:47.757 "cntlid": 17, 00:17:47.757 "qid": 0, 00:17:47.757 "state": "enabled", 00:17:47.757 "thread": "nvmf_tgt_poll_group_000", 00:17:47.757 "listen_address": { 00:17:47.757 "trtype": "TCP", 00:17:47.757 "adrfam": "IPv4", 00:17:47.757 "traddr": 
"10.0.0.2", 00:17:47.757 "trsvcid": "4420" 00:17:47.757 }, 00:17:47.757 "peer_address": { 00:17:47.757 "trtype": "TCP", 00:17:47.757 "adrfam": "IPv4", 00:17:47.757 "traddr": "10.0.0.1", 00:17:47.757 "trsvcid": "41988" 00:17:47.757 }, 00:17:47.757 "auth": { 00:17:47.757 "state": "completed", 00:17:47.757 "digest": "sha256", 00:17:47.757 "dhgroup": "ffdhe3072" 00:17:47.757 } 00:17:47.757 } 00:17:47.757 ]' 00:17:47.757 20:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.757 20:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.757 20:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.757 20:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:47.757 20:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.757 20:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.757 20:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.757 20:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.015 20:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:17:48.580 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.580 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.580 20:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.580 20:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.580 20:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.580 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.580 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:48.580 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:48.839 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:48.839 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.839 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:48.839 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:48.839 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:48.839 20:53:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.839 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.839 20:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.839 20:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.839 20:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.839 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.839 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.098 00:17:49.098 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.098 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.098 20:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.356 20:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.356 20:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.356 20:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.356 20:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.356 20:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.356 20:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.356 { 00:17:49.356 "cntlid": 19, 00:17:49.356 "qid": 0, 00:17:49.356 "state": "enabled", 00:17:49.356 "thread": "nvmf_tgt_poll_group_000", 00:17:49.356 "listen_address": { 00:17:49.356 "trtype": "TCP", 00:17:49.356 "adrfam": "IPv4", 00:17:49.356 "traddr": "10.0.0.2", 00:17:49.356 "trsvcid": "4420" 00:17:49.356 }, 00:17:49.356 "peer_address": { 00:17:49.356 "trtype": "TCP", 00:17:49.356 "adrfam": "IPv4", 00:17:49.356 "traddr": "10.0.0.1", 00:17:49.356 "trsvcid": "42024" 00:17:49.356 }, 00:17:49.356 "auth": { 00:17:49.356 "state": "completed", 00:17:49.356 "digest": "sha256", 00:17:49.356 "dhgroup": "ffdhe3072" 00:17:49.356 } 00:17:49.356 } 00:17:49.356 ]' 00:17:49.356 20:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.356 20:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.356 20:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.356 20:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:49.356 20:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.356 20:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.356 20:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.356 20:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.616 20:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.550 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.808 00:17:50.808 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.808 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.808 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.808 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.808 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.808 20:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.808 20:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.067 20:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.067 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.067 { 00:17:51.067 "cntlid": 21, 00:17:51.067 "qid": 0, 00:17:51.067 "state": "enabled", 00:17:51.067 "thread": "nvmf_tgt_poll_group_000", 00:17:51.067 "listen_address": { 00:17:51.067 "trtype": "TCP", 00:17:51.067 "adrfam": "IPv4", 00:17:51.067 "traddr": "10.0.0.2", 00:17:51.067 "trsvcid": "4420" 00:17:51.067 }, 00:17:51.067 "peer_address": { 00:17:51.067 "trtype": "TCP", 00:17:51.067 "adrfam": "IPv4", 00:17:51.067 "traddr": "10.0.0.1", 00:17:51.067 "trsvcid": "42044" 00:17:51.067 }, 00:17:51.067 "auth": { 00:17:51.067 "state": "completed", 00:17:51.067 "digest": "sha256", 00:17:51.067 "dhgroup": "ffdhe3072" 00:17:51.067 } 00:17:51.067 } 00:17:51.067 ]' 00:17:51.067 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.067 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.067 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.067 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.067 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.067 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.067 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.067 20:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.326 20:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:17:51.948 20:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
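[editorial note] The checks and teardown that follow each attach above reduce to the assertions sketched here, again reconstructed from the log for readability rather than copied from the test. It reuses the placeholder variables from the previous sketch; the dhgroup value matches the ffdhe3072 iterations shown around it, and KEY_SECRET/CTRL_SECRET stand for the literal DHHC-1 strings printed in the log.

  # 4. Confirm the controller exists and that the qpair negotiated the expected auth parameters.
  [[ "$($HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
  qpairs=$($TGT_RPC nvmf_subsystem_get_qpairs $SUBNQN)
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "sha256"    ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "ffdhe3072" ]]
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed" ]]

  # 5. Detach, then repeat the handshake from the kernel initiator using the raw DHHC-1 secrets,
  #    and finally remove the host entry before the next digest/dhgroup/key combination.
  $HOST_RPC bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-secret "$KEY_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
  nvme disconnect -n $SUBNQN
  $TGT_RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN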
00:17:51.948 20:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.948 20:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.948 20:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.948 20:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.948 20:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.948 20:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:51.948 20:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.207 20:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:52.207 20:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.207 20:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.207 20:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:52.207 20:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:52.207 20:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.207 20:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:52.207 20:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.207 20:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.207 20:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.207 20:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.207 20:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.465 00:17:52.465 20:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.465 20:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.465 20:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.722 20:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.722 20:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.722 20:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.722 20:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:52.722 20:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.722 20:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.722 { 00:17:52.722 "cntlid": 23, 00:17:52.722 "qid": 0, 00:17:52.722 "state": "enabled", 00:17:52.722 "thread": "nvmf_tgt_poll_group_000", 00:17:52.722 "listen_address": { 00:17:52.722 "trtype": "TCP", 00:17:52.722 "adrfam": "IPv4", 00:17:52.722 "traddr": "10.0.0.2", 00:17:52.722 "trsvcid": "4420" 00:17:52.722 }, 00:17:52.722 "peer_address": { 00:17:52.722 "trtype": "TCP", 00:17:52.722 "adrfam": "IPv4", 00:17:52.722 "traddr": "10.0.0.1", 00:17:52.722 "trsvcid": "42078" 00:17:52.722 }, 00:17:52.722 "auth": { 00:17:52.722 "state": "completed", 00:17:52.722 "digest": "sha256", 00:17:52.722 "dhgroup": "ffdhe3072" 00:17:52.722 } 00:17:52.722 } 00:17:52.722 ]' 00:17:52.722 20:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.722 20:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.722 20:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.722 20:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.722 20:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.722 20:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.722 20:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.722 20:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.980 20:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:17:53.544 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.544 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.544 20:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.544 20:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.544 20:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.544 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.544 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.544 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:53.544 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:53.802 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:17:53.802 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.802 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:53.802 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:53.802 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:53.802 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.802 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.802 20:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.802 20:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.802 20:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.802 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.802 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.060 00:17:54.060 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.060 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.060 20:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.318 20:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.318 20:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.318 20:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.318 20:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.318 20:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.318 20:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.318 { 00:17:54.318 "cntlid": 25, 00:17:54.318 "qid": 0, 00:17:54.318 "state": "enabled", 00:17:54.318 "thread": "nvmf_tgt_poll_group_000", 00:17:54.318 "listen_address": { 00:17:54.318 "trtype": "TCP", 00:17:54.318 "adrfam": "IPv4", 00:17:54.318 "traddr": "10.0.0.2", 00:17:54.318 "trsvcid": "4420" 00:17:54.318 }, 00:17:54.318 "peer_address": { 00:17:54.318 "trtype": "TCP", 00:17:54.318 "adrfam": "IPv4", 00:17:54.318 "traddr": "10.0.0.1", 00:17:54.318 "trsvcid": "38208" 00:17:54.318 }, 00:17:54.318 "auth": { 00:17:54.318 "state": "completed", 00:17:54.318 "digest": "sha256", 00:17:54.318 "dhgroup": "ffdhe4096" 00:17:54.318 } 00:17:54.318 } 00:17:54.318 ]' 00:17:54.318 20:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.318 20:53:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.318 20:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.318 20:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:54.318 20:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.318 20:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.318 20:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.318 20:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.576 20:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.509 20:53:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.509 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.768 00:17:55.768 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.768 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.768 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.026 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.026 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.026 20:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.026 20:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.026 20:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.026 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.026 { 00:17:56.026 "cntlid": 27, 00:17:56.026 "qid": 0, 00:17:56.026 "state": "enabled", 00:17:56.026 "thread": "nvmf_tgt_poll_group_000", 00:17:56.026 "listen_address": { 00:17:56.026 "trtype": "TCP", 00:17:56.026 "adrfam": "IPv4", 00:17:56.026 "traddr": "10.0.0.2", 00:17:56.026 "trsvcid": "4420" 00:17:56.026 }, 00:17:56.026 "peer_address": { 00:17:56.026 "trtype": "TCP", 00:17:56.026 "adrfam": "IPv4", 00:17:56.026 "traddr": "10.0.0.1", 00:17:56.026 "trsvcid": "38224" 00:17:56.026 }, 00:17:56.026 "auth": { 00:17:56.026 "state": "completed", 00:17:56.026 "digest": "sha256", 00:17:56.026 "dhgroup": "ffdhe4096" 00:17:56.026 } 00:17:56.026 } 00:17:56.026 ]' 00:17:56.026 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.026 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.026 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.026 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.026 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.026 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.026 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.026 20:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.283 20:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.215 20:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.472 00:17:57.472 20:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.472 20:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.472 20:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.729 20:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.729 20:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.729 20:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.729 20:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.729 20:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.729 20:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.729 { 00:17:57.729 "cntlid": 29, 00:17:57.729 "qid": 0, 00:17:57.729 "state": "enabled", 00:17:57.729 "thread": "nvmf_tgt_poll_group_000", 00:17:57.729 "listen_address": { 00:17:57.729 "trtype": "TCP", 00:17:57.729 "adrfam": "IPv4", 00:17:57.729 "traddr": "10.0.0.2", 00:17:57.729 "trsvcid": "4420" 00:17:57.729 }, 00:17:57.729 "peer_address": { 00:17:57.729 "trtype": "TCP", 00:17:57.729 "adrfam": "IPv4", 00:17:57.729 "traddr": "10.0.0.1", 00:17:57.729 "trsvcid": "38254" 00:17:57.729 }, 00:17:57.729 "auth": { 00:17:57.729 "state": "completed", 00:17:57.729 "digest": "sha256", 00:17:57.729 "dhgroup": "ffdhe4096" 00:17:57.729 } 00:17:57.729 } 00:17:57.729 ]' 00:17:57.729 20:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.729 20:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.729 20:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.729 20:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.729 20:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.729 20:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.729 20:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.729 20:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.987 20:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
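The entries above repeat one DH-HMAC-CHAP cycle per key index for the sha256/ffdhe4096 combination: the target allows the host NQN with a key pair, the host attaches a bdev_nvme controller with the matching pair, the resulting qpair is inspected, and the controller is detached before the next key. A minimal sketch of that cycle, using only the RPCs and flags visible in this log (rpc_cmd in the log wraps the same rpc.py against the target's default socket; paths, NQNs and the key0/ckey0 names are the ones printed above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # host side: restrict the initiator to one digest/dhgroup combination for this pass
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # target side: allow the host NQN with a subsystem key and a controller key
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # host side: attach a controller that must authenticate with the same key pair
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # tear down before the next key index
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0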
00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.919 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.176 00:17:59.176 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.176 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.176 20:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.434 20:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.434 20:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.434 20:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.434 20:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.434 20:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.434 20:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.434 { 00:17:59.434 "cntlid": 31, 00:17:59.434 "qid": 0, 00:17:59.434 "state": "enabled", 00:17:59.434 "thread": "nvmf_tgt_poll_group_000", 00:17:59.434 "listen_address": { 00:17:59.434 "trtype": "TCP", 00:17:59.434 "adrfam": "IPv4", 00:17:59.434 "traddr": "10.0.0.2", 00:17:59.434 "trsvcid": 
"4420" 00:17:59.434 }, 00:17:59.434 "peer_address": { 00:17:59.434 "trtype": "TCP", 00:17:59.434 "adrfam": "IPv4", 00:17:59.434 "traddr": "10.0.0.1", 00:17:59.434 "trsvcid": "38284" 00:17:59.434 }, 00:17:59.434 "auth": { 00:17:59.434 "state": "completed", 00:17:59.434 "digest": "sha256", 00:17:59.434 "dhgroup": "ffdhe4096" 00:17:59.434 } 00:17:59.434 } 00:17:59.434 ]' 00:17:59.434 20:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.434 20:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.434 20:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.434 20:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.434 20:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.434 20:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.434 20:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.434 20:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.697 20:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:18:00.261 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.261 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.261 20:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.261 20:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.261 20:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.261 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.261 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.261 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:00.261 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:00.518 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:00.518 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.518 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.518 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:00.518 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:00.518 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.518 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.518 20:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.518 20:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.518 20:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.518 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.518 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.777 00:18:00.777 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.777 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.777 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.034 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.034 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.034 20:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.034 20:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.034 20:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.034 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.034 { 00:18:01.034 "cntlid": 33, 00:18:01.034 "qid": 0, 00:18:01.034 "state": "enabled", 00:18:01.034 "thread": "nvmf_tgt_poll_group_000", 00:18:01.034 "listen_address": { 00:18:01.034 "trtype": "TCP", 00:18:01.034 "adrfam": "IPv4", 00:18:01.034 "traddr": "10.0.0.2", 00:18:01.034 "trsvcid": "4420" 00:18:01.034 }, 00:18:01.034 "peer_address": { 00:18:01.034 "trtype": "TCP", 00:18:01.034 "adrfam": "IPv4", 00:18:01.034 "traddr": "10.0.0.1", 00:18:01.034 "trsvcid": "38314" 00:18:01.034 }, 00:18:01.034 "auth": { 00:18:01.034 "state": "completed", 00:18:01.034 "digest": "sha256", 00:18:01.034 "dhgroup": "ffdhe6144" 00:18:01.034 } 00:18:01.034 } 00:18:01.034 ]' 00:18:01.034 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.034 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.034 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.291 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:01.291 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.291 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:01.291 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.291 20:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.291 20:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:18:02.226 20:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.226 20:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.226 20:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.226 20:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.226 20:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.226 20:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.226 20:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:02.226 20:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:02.226 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:02.226 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.226 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.226 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:02.226 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.226 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.226 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.226 20:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.226 20:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.226 20:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.226 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.226 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.818 00:18:02.818 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.818 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.818 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.818 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.818 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.818 20:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.818 20:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.818 20:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.818 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.818 { 00:18:02.818 "cntlid": 35, 00:18:02.818 "qid": 0, 00:18:02.818 "state": "enabled", 00:18:02.818 "thread": "nvmf_tgt_poll_group_000", 00:18:02.818 "listen_address": { 00:18:02.818 "trtype": "TCP", 00:18:02.818 "adrfam": "IPv4", 00:18:02.818 "traddr": "10.0.0.2", 00:18:02.818 "trsvcid": "4420" 00:18:02.818 }, 00:18:02.818 "peer_address": { 00:18:02.818 "trtype": "TCP", 00:18:02.818 "adrfam": "IPv4", 00:18:02.818 "traddr": "10.0.0.1", 00:18:02.818 "trsvcid": "38340" 00:18:02.818 }, 00:18:02.818 "auth": { 00:18:02.818 "state": "completed", 00:18:02.818 "digest": "sha256", 00:18:02.818 "dhgroup": "ffdhe6144" 00:18:02.818 } 00:18:02.818 } 00:18:02.818 ]' 00:18:02.818 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.818 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.818 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.818 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:03.076 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.076 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.076 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.076 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.076 20:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.008 20:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.574 00:18:04.574 20:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.574 20:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.574 20:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.574 20:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.574 20:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.574 20:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
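Each cycle is verified the same way: the controller name is read back with bdev_nvme_get_controllers, then nvmf_subsystem_get_qpairs is dumped and the first qpair's auth object is checked field by field with jq, which is what the [[ sha256 == sha256 ]] style assertions above correspond to. A compact sketch of that verification step, reusing $rpc, $subnqn and the jq paths from this log (the qpairs variable name is illustrative):

    # the attached controller should show up under its -b name
    [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # the admin qpair must report the negotiated digest, dhgroup and a completed auth state
    qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]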
00:18:04.574 20:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.574 20:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.574 20:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.574 { 00:18:04.574 "cntlid": 37, 00:18:04.574 "qid": 0, 00:18:04.574 "state": "enabled", 00:18:04.574 "thread": "nvmf_tgt_poll_group_000", 00:18:04.574 "listen_address": { 00:18:04.574 "trtype": "TCP", 00:18:04.574 "adrfam": "IPv4", 00:18:04.574 "traddr": "10.0.0.2", 00:18:04.574 "trsvcid": "4420" 00:18:04.574 }, 00:18:04.574 "peer_address": { 00:18:04.574 "trtype": "TCP", 00:18:04.574 "adrfam": "IPv4", 00:18:04.574 "traddr": "10.0.0.1", 00:18:04.574 "trsvcid": "59148" 00:18:04.574 }, 00:18:04.574 "auth": { 00:18:04.574 "state": "completed", 00:18:04.574 "digest": "sha256", 00:18:04.574 "dhgroup": "ffdhe6144" 00:18:04.574 } 00:18:04.574 } 00:18:04.574 ]' 00:18:04.574 20:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.574 20:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.574 20:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.832 20:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.832 20:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.832 20:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.832 20:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.832 20:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.832 20:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.766 20:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.024 20:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.024 20:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.024 20:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.282 00:18:06.282 20:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.282 20:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.282 20:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.577 20:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.577 20:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.577 20:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.577 20:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.577 20:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.577 20:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.577 { 00:18:06.577 "cntlid": 39, 00:18:06.577 "qid": 0, 00:18:06.577 "state": "enabled", 00:18:06.578 "thread": "nvmf_tgt_poll_group_000", 00:18:06.578 "listen_address": { 00:18:06.578 "trtype": "TCP", 00:18:06.578 "adrfam": "IPv4", 00:18:06.578 "traddr": "10.0.0.2", 00:18:06.578 "trsvcid": "4420" 00:18:06.578 }, 00:18:06.578 "peer_address": { 00:18:06.578 "trtype": "TCP", 00:18:06.578 "adrfam": "IPv4", 00:18:06.578 "traddr": "10.0.0.1", 00:18:06.578 "trsvcid": "59162" 00:18:06.578 }, 00:18:06.578 "auth": { 00:18:06.578 "state": "completed", 00:18:06.578 "digest": "sha256", 00:18:06.578 "dhgroup": "ffdhe6144" 00:18:06.578 } 00:18:06.578 } 00:18:06.578 ]' 00:18:06.578 20:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.578 20:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.578 20:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.578 20:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.578 20:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.578 20:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.578 20:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.578 20:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.865 20:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:18:07.434 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.434 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.434 20:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.434 20:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.434 20:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.434 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.434 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.434 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:07.434 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:07.695 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:07.695 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.695 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.695 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:07.695 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:07.695 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.695 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.695 20:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.695 20:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.695 20:54:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.695 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.695 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.266 00:18:08.266 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.266 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.266 20:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.266 20:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.266 20:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.266 20:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.266 20:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.266 20:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.266 20:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.266 { 00:18:08.266 "cntlid": 41, 00:18:08.266 "qid": 0, 00:18:08.266 "state": "enabled", 00:18:08.266 "thread": "nvmf_tgt_poll_group_000", 00:18:08.266 "listen_address": { 00:18:08.266 "trtype": "TCP", 00:18:08.266 "adrfam": "IPv4", 00:18:08.266 "traddr": "10.0.0.2", 00:18:08.266 "trsvcid": "4420" 00:18:08.266 }, 00:18:08.266 "peer_address": { 00:18:08.266 "trtype": "TCP", 00:18:08.266 "adrfam": "IPv4", 00:18:08.266 "traddr": "10.0.0.1", 00:18:08.266 "trsvcid": "59190" 00:18:08.266 }, 00:18:08.266 "auth": { 00:18:08.266 "state": "completed", 00:18:08.266 "digest": "sha256", 00:18:08.266 "dhgroup": "ffdhe8192" 00:18:08.266 } 00:18:08.266 } 00:18:08.266 ]' 00:18:08.266 20:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.528 20:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.528 20:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.528 20:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:08.528 20:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.528 20:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.528 20:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.528 20:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.528 20:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.469 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.035 00:18:10.035 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.035 20:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.035 20:54:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.294 20:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.294 20:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.294 20:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.294 20:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.294 20:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.294 20:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.294 { 00:18:10.294 "cntlid": 43, 00:18:10.294 "qid": 0, 00:18:10.294 "state": "enabled", 00:18:10.294 "thread": "nvmf_tgt_poll_group_000", 00:18:10.294 "listen_address": { 00:18:10.294 "trtype": "TCP", 00:18:10.294 "adrfam": "IPv4", 00:18:10.294 "traddr": "10.0.0.2", 00:18:10.294 "trsvcid": "4420" 00:18:10.294 }, 00:18:10.294 "peer_address": { 00:18:10.294 "trtype": "TCP", 00:18:10.294 "adrfam": "IPv4", 00:18:10.294 "traddr": "10.0.0.1", 00:18:10.294 "trsvcid": "59218" 00:18:10.294 }, 00:18:10.294 "auth": { 00:18:10.294 "state": "completed", 00:18:10.294 "digest": "sha256", 00:18:10.294 "dhgroup": "ffdhe8192" 00:18:10.294 } 00:18:10.294 } 00:18:10.294 ]' 00:18:10.294 20:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.294 20:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.294 20:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.294 20:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.294 20:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.294 20:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.294 20:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.294 20:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.552 20:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.545 20:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.546 20:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.546 20:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.546 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.546 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.112 00:18:12.112 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.112 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.112 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.112 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.112 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.112 20:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.112 20:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.112 20:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.112 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.112 { 00:18:12.112 "cntlid": 45, 00:18:12.112 "qid": 0, 00:18:12.112 "state": "enabled", 00:18:12.112 "thread": "nvmf_tgt_poll_group_000", 00:18:12.112 "listen_address": { 00:18:12.112 "trtype": "TCP", 00:18:12.112 "adrfam": "IPv4", 00:18:12.112 "traddr": "10.0.0.2", 00:18:12.112 
"trsvcid": "4420" 00:18:12.112 }, 00:18:12.112 "peer_address": { 00:18:12.112 "trtype": "TCP", 00:18:12.112 "adrfam": "IPv4", 00:18:12.112 "traddr": "10.0.0.1", 00:18:12.112 "trsvcid": "59244" 00:18:12.112 }, 00:18:12.112 "auth": { 00:18:12.112 "state": "completed", 00:18:12.112 "digest": "sha256", 00:18:12.112 "dhgroup": "ffdhe8192" 00:18:12.112 } 00:18:12.112 } 00:18:12.112 ]' 00:18:12.112 20:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.371 20:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.371 20:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.371 20:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.371 20:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.371 20:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.371 20:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.371 20:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.628 20:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:18:13.195 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.195 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.195 20:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.195 20:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.195 20:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.195 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.195 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:13.195 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:13.455 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:13.455 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.455 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.455 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:13.455 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.455 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:18:13.455 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:13.455 20:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.455 20:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.455 20:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.455 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.455 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.023 00:18:14.023 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.023 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.023 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.023 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.023 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.023 20:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.023 20:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.283 20:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.283 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.283 { 00:18:14.283 "cntlid": 47, 00:18:14.283 "qid": 0, 00:18:14.283 "state": "enabled", 00:18:14.283 "thread": "nvmf_tgt_poll_group_000", 00:18:14.283 "listen_address": { 00:18:14.283 "trtype": "TCP", 00:18:14.283 "adrfam": "IPv4", 00:18:14.283 "traddr": "10.0.0.2", 00:18:14.283 "trsvcid": "4420" 00:18:14.283 }, 00:18:14.283 "peer_address": { 00:18:14.283 "trtype": "TCP", 00:18:14.283 "adrfam": "IPv4", 00:18:14.283 "traddr": "10.0.0.1", 00:18:14.283 "trsvcid": "59266" 00:18:14.283 }, 00:18:14.283 "auth": { 00:18:14.283 "state": "completed", 00:18:14.283 "digest": "sha256", 00:18:14.283 "dhgroup": "ffdhe8192" 00:18:14.283 } 00:18:14.283 } 00:18:14.283 ]' 00:18:14.283 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.283 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.283 20:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.283 20:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.283 20:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.283 20:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.283 20:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:18:14.283 20:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.542 20:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:18:15.113 20:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.113 20:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.113 20:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.113 20:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.113 20:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.113 20:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:15.113 20:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.113 20:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.113 20:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.113 20:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.372 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:15.372 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.372 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.372 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:15.372 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.372 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.372 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.372 20:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.372 20:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.372 20:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.372 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.372 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.633 00:18:15.633 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.633 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.633 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.633 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.633 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.633 20:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.633 20:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.633 20:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.633 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.633 { 00:18:15.633 "cntlid": 49, 00:18:15.633 "qid": 0, 00:18:15.633 "state": "enabled", 00:18:15.633 "thread": "nvmf_tgt_poll_group_000", 00:18:15.633 "listen_address": { 00:18:15.633 "trtype": "TCP", 00:18:15.633 "adrfam": "IPv4", 00:18:15.633 "traddr": "10.0.0.2", 00:18:15.633 "trsvcid": "4420" 00:18:15.633 }, 00:18:15.633 "peer_address": { 00:18:15.633 "trtype": "TCP", 00:18:15.633 "adrfam": "IPv4", 00:18:15.633 "traddr": "10.0.0.1", 00:18:15.633 "trsvcid": "36762" 00:18:15.633 }, 00:18:15.633 "auth": { 00:18:15.633 "state": "completed", 00:18:15.633 "digest": "sha384", 00:18:15.633 "dhgroup": "null" 00:18:15.633 } 00:18:15.633 } 00:18:15.633 ]' 00:18:15.892 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.892 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.892 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.892 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:15.892 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.892 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.892 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.892 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.151 20:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:18:16.718 20:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.718 20:54:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.718 20:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.718 20:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.718 20:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.718 20:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.718 20:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:16.718 20:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:16.977 20:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:16.977 20:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.977 20:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:16.977 20:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:16.977 20:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:16.977 20:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.977 20:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.977 20:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.977 20:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.977 20:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.977 20:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.977 20:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.236 00:18:17.236 20:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.236 20:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.236 20:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.494 20:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.494 20:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.494 20:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.494 20:54:21 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:17.494 20:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.494 20:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.494 { 00:18:17.494 "cntlid": 51, 00:18:17.494 "qid": 0, 00:18:17.494 "state": "enabled", 00:18:17.494 "thread": "nvmf_tgt_poll_group_000", 00:18:17.494 "listen_address": { 00:18:17.494 "trtype": "TCP", 00:18:17.494 "adrfam": "IPv4", 00:18:17.494 "traddr": "10.0.0.2", 00:18:17.494 "trsvcid": "4420" 00:18:17.494 }, 00:18:17.494 "peer_address": { 00:18:17.494 "trtype": "TCP", 00:18:17.494 "adrfam": "IPv4", 00:18:17.494 "traddr": "10.0.0.1", 00:18:17.494 "trsvcid": "36788" 00:18:17.494 }, 00:18:17.494 "auth": { 00:18:17.494 "state": "completed", 00:18:17.494 "digest": "sha384", 00:18:17.494 "dhgroup": "null" 00:18:17.494 } 00:18:17.494 } 00:18:17.494 ]' 00:18:17.494 20:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.494 20:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.494 20:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.494 20:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:17.494 20:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.494 20:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.494 20:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.494 20:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.752 20:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:18.685 
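Between host-RPC passes the same credentials are also replayed through the kernel initiator with nvme-cli and then torn down again, as in the sha384/null pass just above. A rough shell equivalent (the DHHC-1 strings below are placeholders; the real values are the base64 secrets printed in this log and belong to the key pair generated earlier in the run):

# kernel-initiator replay of one authenticated connect, then teardown
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret 'DHHC-1:01:<host key>' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller key>'

nvme disconnect -n "$subnqn"   # expect "disconnected 1 controller(s)", as logged above

# target side: drop the host entry again before the next digest/dhgroup/key pass
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_host "$subnqn" "$hostnqn"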
20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.685 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.943 00:18:18.943 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.943 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.943 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.943 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.943 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.943 20:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.943 20:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.943 20:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.943 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.943 { 00:18:18.943 "cntlid": 53, 00:18:18.943 "qid": 0, 00:18:18.943 "state": "enabled", 00:18:18.943 "thread": "nvmf_tgt_poll_group_000", 00:18:18.943 "listen_address": { 00:18:18.943 "trtype": "TCP", 00:18:18.943 "adrfam": "IPv4", 00:18:18.943 "traddr": "10.0.0.2", 00:18:18.943 "trsvcid": "4420" 00:18:18.943 }, 00:18:18.943 "peer_address": { 00:18:18.943 "trtype": "TCP", 00:18:18.943 "adrfam": "IPv4", 00:18:18.943 "traddr": "10.0.0.1", 00:18:18.943 "trsvcid": "36816" 00:18:18.943 }, 00:18:18.943 "auth": { 00:18:18.943 "state": "completed", 00:18:18.943 "digest": "sha384", 00:18:18.943 "dhgroup": "null" 00:18:18.943 } 00:18:18.943 } 00:18:18.943 ]' 00:18:18.943 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.201 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:18:19.201 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.201 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:19.201 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.201 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.201 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.201 20:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.459 20:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:18:20.022 20:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.022 20:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.022 20:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.022 20:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.022 20:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.022 20:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.022 20:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:20.022 20:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:20.281 20:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:20.281 20:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.281 20:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:20.281 20:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:20.281 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:20.281 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.281 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:20.281 20:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.281 20:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.281 20:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.281 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.281 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.539 00:18:20.539 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.539 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.539 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.539 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.539 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.539 20:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.539 20:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.539 20:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.539 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.539 { 00:18:20.539 "cntlid": 55, 00:18:20.539 "qid": 0, 00:18:20.539 "state": "enabled", 00:18:20.539 "thread": "nvmf_tgt_poll_group_000", 00:18:20.539 "listen_address": { 00:18:20.539 "trtype": "TCP", 00:18:20.539 "adrfam": "IPv4", 00:18:20.539 "traddr": "10.0.0.2", 00:18:20.539 "trsvcid": "4420" 00:18:20.539 }, 00:18:20.539 "peer_address": { 00:18:20.539 "trtype": "TCP", 00:18:20.539 "adrfam": "IPv4", 00:18:20.539 "traddr": "10.0.0.1", 00:18:20.539 "trsvcid": "36854" 00:18:20.539 }, 00:18:20.539 "auth": { 00:18:20.539 "state": "completed", 00:18:20.539 "digest": "sha384", 00:18:20.539 "dhgroup": "null" 00:18:20.539 } 00:18:20.539 } 00:18:20.539 ]' 00:18:20.539 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.797 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.797 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.797 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:20.797 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.797 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.797 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.797 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.077 20:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:18:21.651 20:54:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.651 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.651 20:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.651 20:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.651 20:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.651 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.651 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.651 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:21.651 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:21.908 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:21.909 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.909 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:21.909 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:21.909 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.909 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.909 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.909 20:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.909 20:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.909 20:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.909 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.909 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.167 00:18:22.167 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.167 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.167 20:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.167 20:54:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.167 20:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.167 20:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.167 20:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.167 20:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.167 20:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.167 { 00:18:22.167 "cntlid": 57, 00:18:22.167 "qid": 0, 00:18:22.167 "state": "enabled", 00:18:22.167 "thread": "nvmf_tgt_poll_group_000", 00:18:22.167 "listen_address": { 00:18:22.167 "trtype": "TCP", 00:18:22.167 "adrfam": "IPv4", 00:18:22.167 "traddr": "10.0.0.2", 00:18:22.167 "trsvcid": "4420" 00:18:22.167 }, 00:18:22.167 "peer_address": { 00:18:22.168 "trtype": "TCP", 00:18:22.168 "adrfam": "IPv4", 00:18:22.168 "traddr": "10.0.0.1", 00:18:22.168 "trsvcid": "36896" 00:18:22.168 }, 00:18:22.168 "auth": { 00:18:22.168 "state": "completed", 00:18:22.168 "digest": "sha384", 00:18:22.168 "dhgroup": "ffdhe2048" 00:18:22.168 } 00:18:22.168 } 00:18:22.168 ]' 00:18:22.168 20:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.426 20:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.426 20:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.426 20:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.426 20:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.426 20:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.426 20:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.426 20:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.685 20:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:18:23.250 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.250 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.250 20:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.250 20:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.250 20:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.250 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.250 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:23.250 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:23.508 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:23.508 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.508 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.508 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:23.508 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:23.508 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.508 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.508 20:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.508 20:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.508 20:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.508 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.508 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.766 00:18:23.766 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.766 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.766 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.025 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.025 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.025 20:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.025 20:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.025 20:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.025 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.025 { 00:18:24.025 "cntlid": 59, 00:18:24.025 "qid": 0, 00:18:24.025 "state": "enabled", 00:18:24.025 "thread": "nvmf_tgt_poll_group_000", 00:18:24.025 "listen_address": { 00:18:24.025 "trtype": "TCP", 00:18:24.025 "adrfam": "IPv4", 00:18:24.025 "traddr": "10.0.0.2", 00:18:24.025 "trsvcid": "4420" 00:18:24.025 }, 00:18:24.025 "peer_address": { 00:18:24.025 "trtype": "TCP", 00:18:24.025 "adrfam": "IPv4", 00:18:24.025 
"traddr": "10.0.0.1", 00:18:24.025 "trsvcid": "48270" 00:18:24.025 }, 00:18:24.025 "auth": { 00:18:24.025 "state": "completed", 00:18:24.025 "digest": "sha384", 00:18:24.025 "dhgroup": "ffdhe2048" 00:18:24.025 } 00:18:24.025 } 00:18:24.025 ]' 00:18:24.025 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.025 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.025 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.025 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:24.025 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.025 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.025 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.025 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.283 20:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:18:24.850 20:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.109 20:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.367 00:18:25.367 20:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.367 20:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.367 20:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.625 20:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.625 20:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.625 20:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.625 20:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.625 20:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.625 20:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.625 { 00:18:25.625 "cntlid": 61, 00:18:25.625 "qid": 0, 00:18:25.625 "state": "enabled", 00:18:25.625 "thread": "nvmf_tgt_poll_group_000", 00:18:25.625 "listen_address": { 00:18:25.625 "trtype": "TCP", 00:18:25.625 "adrfam": "IPv4", 00:18:25.625 "traddr": "10.0.0.2", 00:18:25.625 "trsvcid": "4420" 00:18:25.625 }, 00:18:25.625 "peer_address": { 00:18:25.626 "trtype": "TCP", 00:18:25.626 "adrfam": "IPv4", 00:18:25.626 "traddr": "10.0.0.1", 00:18:25.626 "trsvcid": "48298" 00:18:25.626 }, 00:18:25.626 "auth": { 00:18:25.626 "state": "completed", 00:18:25.626 "digest": "sha384", 00:18:25.626 "dhgroup": "ffdhe2048" 00:18:25.626 } 00:18:25.626 } 00:18:25.626 ]' 00:18:25.626 20:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.626 20:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.626 20:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.626 20:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:25.626 20:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.626 20:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.626 20:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.626 20:54:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.884 20:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.835 20:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.836 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.836 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.095 00:18:27.095 20:54:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.095 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.095 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.353 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.353 20:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.353 20:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.353 20:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.353 20:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.353 20:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.353 { 00:18:27.353 "cntlid": 63, 00:18:27.353 "qid": 0, 00:18:27.353 "state": "enabled", 00:18:27.353 "thread": "nvmf_tgt_poll_group_000", 00:18:27.353 "listen_address": { 00:18:27.353 "trtype": "TCP", 00:18:27.353 "adrfam": "IPv4", 00:18:27.353 "traddr": "10.0.0.2", 00:18:27.353 "trsvcid": "4420" 00:18:27.353 }, 00:18:27.353 "peer_address": { 00:18:27.353 "trtype": "TCP", 00:18:27.353 "adrfam": "IPv4", 00:18:27.353 "traddr": "10.0.0.1", 00:18:27.353 "trsvcid": "48326" 00:18:27.353 }, 00:18:27.353 "auth": { 00:18:27.353 "state": "completed", 00:18:27.353 "digest": "sha384", 00:18:27.353 "dhgroup": "ffdhe2048" 00:18:27.353 } 00:18:27.353 } 00:18:27.353 ]' 00:18:27.353 20:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.353 20:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.353 20:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.353 20:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:27.353 20:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.353 20:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.353 20:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.353 20:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.611 20:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:18:28.176 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.176 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.176 20:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.176 20:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
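Each attach in this trace is followed by the same verification before teardown: the host RPC socket must report the nvme0 controller, and the target must report a qpair whose auth section shows the digest and dhgroup under test in the "completed" state; the whole cycle sits inside the for-digest / for-dhgroup / for-keyid loops visible in the trace. Condensed into shell for the sha384/ffdhe3072 pass shown here (same RPC calls and jq filters as the log, only the plumbing is abbreviated):

# post-attach verification, then cleanup before the next loop iteration
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0

# host reports exactly the controller we attached
[[ "$("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# target reports an authenticated qpair matching the digest/dhgroup under test
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha384    ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe3072 ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

# detach so the next digest/dhgroup/key combination starts clean
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0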
00:18:28.176 20:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.176 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.176 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.176 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:28.176 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:28.435 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:28.435 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.435 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.435 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:28.435 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:28.435 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.435 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.435 20:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.435 20:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.435 20:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.435 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.435 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.693 00:18:28.693 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.693 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.693 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.951 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.951 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.951 20:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.951 20:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.951 20:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.951 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.951 { 
00:18:28.951 "cntlid": 65, 00:18:28.951 "qid": 0, 00:18:28.951 "state": "enabled", 00:18:28.951 "thread": "nvmf_tgt_poll_group_000", 00:18:28.951 "listen_address": { 00:18:28.951 "trtype": "TCP", 00:18:28.951 "adrfam": "IPv4", 00:18:28.951 "traddr": "10.0.0.2", 00:18:28.951 "trsvcid": "4420" 00:18:28.951 }, 00:18:28.951 "peer_address": { 00:18:28.951 "trtype": "TCP", 00:18:28.951 "adrfam": "IPv4", 00:18:28.951 "traddr": "10.0.0.1", 00:18:28.951 "trsvcid": "48358" 00:18:28.951 }, 00:18:28.951 "auth": { 00:18:28.951 "state": "completed", 00:18:28.951 "digest": "sha384", 00:18:28.951 "dhgroup": "ffdhe3072" 00:18:28.951 } 00:18:28.951 } 00:18:28.951 ]' 00:18:28.951 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.951 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.951 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.951 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:28.951 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.951 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.951 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.951 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.210 20:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:18:29.777 20:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.036 20:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.293 00:18:30.293 20:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.293 20:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.293 20:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.551 20:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.551 20:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.551 20:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.551 20:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.551 20:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.551 20:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.551 { 00:18:30.551 "cntlid": 67, 00:18:30.551 "qid": 0, 00:18:30.551 "state": "enabled", 00:18:30.551 "thread": "nvmf_tgt_poll_group_000", 00:18:30.551 "listen_address": { 00:18:30.551 "trtype": "TCP", 00:18:30.551 "adrfam": "IPv4", 00:18:30.551 "traddr": "10.0.0.2", 00:18:30.551 "trsvcid": "4420" 00:18:30.551 }, 00:18:30.551 "peer_address": { 00:18:30.551 "trtype": "TCP", 00:18:30.551 "adrfam": "IPv4", 00:18:30.551 "traddr": "10.0.0.1", 00:18:30.551 "trsvcid": "48392" 00:18:30.551 }, 00:18:30.551 "auth": { 00:18:30.551 "state": "completed", 00:18:30.551 "digest": "sha384", 00:18:30.551 "dhgroup": "ffdhe3072" 00:18:30.551 } 00:18:30.551 } 00:18:30.551 ]' 00:18:30.551 20:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.551 20:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.551 20:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.551 20:54:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:30.551 20:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.551 20:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.551 20:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.551 20:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.808 20:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.738 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.994 00:18:31.994 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.994 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.994 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.251 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.251 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.251 20:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.251 20:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.251 20:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.251 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.251 { 00:18:32.251 "cntlid": 69, 00:18:32.251 "qid": 0, 00:18:32.251 "state": "enabled", 00:18:32.251 "thread": "nvmf_tgt_poll_group_000", 00:18:32.251 "listen_address": { 00:18:32.251 "trtype": "TCP", 00:18:32.251 "adrfam": "IPv4", 00:18:32.251 "traddr": "10.0.0.2", 00:18:32.251 "trsvcid": "4420" 00:18:32.251 }, 00:18:32.251 "peer_address": { 00:18:32.251 "trtype": "TCP", 00:18:32.251 "adrfam": "IPv4", 00:18:32.251 "traddr": "10.0.0.1", 00:18:32.251 "trsvcid": "48424" 00:18:32.251 }, 00:18:32.251 "auth": { 00:18:32.251 "state": "completed", 00:18:32.251 "digest": "sha384", 00:18:32.251 "dhgroup": "ffdhe3072" 00:18:32.251 } 00:18:32.251 } 00:18:32.251 ]' 00:18:32.251 20:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.251 20:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.251 20:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.251 20:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:32.251 20:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.251 20:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.251 20:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.251 20:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.508 20:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret 
DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:18:33.440 20:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.440 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.698 00:18:33.698 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.698 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.698 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.957 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.957 20:54:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.957 20:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.957 20:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.957 20:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.957 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.957 { 00:18:33.957 "cntlid": 71, 00:18:33.957 "qid": 0, 00:18:33.957 "state": "enabled", 00:18:33.957 "thread": "nvmf_tgt_poll_group_000", 00:18:33.957 "listen_address": { 00:18:33.957 "trtype": "TCP", 00:18:33.957 "adrfam": "IPv4", 00:18:33.957 "traddr": "10.0.0.2", 00:18:33.957 "trsvcid": "4420" 00:18:33.957 }, 00:18:33.957 "peer_address": { 00:18:33.957 "trtype": "TCP", 00:18:33.957 "adrfam": "IPv4", 00:18:33.957 "traddr": "10.0.0.1", 00:18:33.957 "trsvcid": "48446" 00:18:33.957 }, 00:18:33.957 "auth": { 00:18:33.957 "state": "completed", 00:18:33.957 "digest": "sha384", 00:18:33.958 "dhgroup": "ffdhe3072" 00:18:33.958 } 00:18:33.958 } 00:18:33.958 ]' 00:18:33.958 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.958 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.958 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.958 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:33.958 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.958 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.958 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.958 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.216 20:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:18:34.784 20:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.784 20:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.784 20:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.784 20:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.043 20:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.302 00:18:35.302 20:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.302 20:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.302 20:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.597 20:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.597 20:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.597 20:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.597 20:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.597 20:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.597 20:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.597 { 00:18:35.597 "cntlid": 73, 00:18:35.597 "qid": 0, 00:18:35.597 "state": "enabled", 00:18:35.597 "thread": "nvmf_tgt_poll_group_000", 00:18:35.597 "listen_address": { 00:18:35.597 "trtype": "TCP", 00:18:35.597 "adrfam": "IPv4", 00:18:35.597 "traddr": "10.0.0.2", 00:18:35.597 "trsvcid": "4420" 00:18:35.597 }, 00:18:35.597 "peer_address": { 00:18:35.597 "trtype": "TCP", 00:18:35.597 "adrfam": "IPv4", 00:18:35.597 "traddr": "10.0.0.1", 00:18:35.597 "trsvcid": "34396" 00:18:35.597 }, 00:18:35.597 "auth": { 00:18:35.597 
"state": "completed", 00:18:35.597 "digest": "sha384", 00:18:35.597 "dhgroup": "ffdhe4096" 00:18:35.597 } 00:18:35.597 } 00:18:35.597 ]' 00:18:35.597 20:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.597 20:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.597 20:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.597 20:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:35.597 20:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.597 20:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.597 20:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.597 20:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.891 20:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:18:36.481 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.481 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.481 20:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.481 20:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.481 20:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.481 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.481 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:36.481 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:36.740 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:36.740 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.740 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.740 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:36.740 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:36.740 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.740 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.740 20:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.740 20:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.740 20:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.741 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.741 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.999 00:18:36.999 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.999 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.999 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.257 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.257 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.257 20:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.257 20:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.257 20:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.257 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.257 { 00:18:37.257 "cntlid": 75, 00:18:37.257 "qid": 0, 00:18:37.257 "state": "enabled", 00:18:37.257 "thread": "nvmf_tgt_poll_group_000", 00:18:37.257 "listen_address": { 00:18:37.257 "trtype": "TCP", 00:18:37.257 "adrfam": "IPv4", 00:18:37.257 "traddr": "10.0.0.2", 00:18:37.257 "trsvcid": "4420" 00:18:37.257 }, 00:18:37.257 "peer_address": { 00:18:37.257 "trtype": "TCP", 00:18:37.257 "adrfam": "IPv4", 00:18:37.257 "traddr": "10.0.0.1", 00:18:37.257 "trsvcid": "34424" 00:18:37.257 }, 00:18:37.257 "auth": { 00:18:37.257 "state": "completed", 00:18:37.257 "digest": "sha384", 00:18:37.257 "dhgroup": "ffdhe4096" 00:18:37.257 } 00:18:37.257 } 00:18:37.257 ]' 00:18:37.257 20:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.257 20:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.257 20:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.257 20:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:37.257 20:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.257 20:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.257 20:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.257 20:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.515 20:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.451 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:38.710 00:18:38.710 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.710 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.710 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.968 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.969 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.969 20:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.969 20:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.969 20:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.969 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.969 { 00:18:38.969 "cntlid": 77, 00:18:38.969 "qid": 0, 00:18:38.969 "state": "enabled", 00:18:38.969 "thread": "nvmf_tgt_poll_group_000", 00:18:38.969 "listen_address": { 00:18:38.969 "trtype": "TCP", 00:18:38.969 "adrfam": "IPv4", 00:18:38.969 "traddr": "10.0.0.2", 00:18:38.969 "trsvcid": "4420" 00:18:38.969 }, 00:18:38.969 "peer_address": { 00:18:38.969 "trtype": "TCP", 00:18:38.969 "adrfam": "IPv4", 00:18:38.969 "traddr": "10.0.0.1", 00:18:38.969 "trsvcid": "34440" 00:18:38.969 }, 00:18:38.969 "auth": { 00:18:38.969 "state": "completed", 00:18:38.969 "digest": "sha384", 00:18:38.969 "dhgroup": "ffdhe4096" 00:18:38.969 } 00:18:38.969 } 00:18:38.969 ]' 00:18:38.969 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.969 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.969 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.969 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:38.969 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.969 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.969 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.969 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.227 20:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.162 20:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.421 00:18:40.421 20:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.421 20:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.421 20:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.421 20:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.421 20:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.421 20:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.421 20:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.680 20:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.680 20:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.680 { 00:18:40.680 "cntlid": 79, 00:18:40.680 "qid": 
0, 00:18:40.680 "state": "enabled", 00:18:40.680 "thread": "nvmf_tgt_poll_group_000", 00:18:40.680 "listen_address": { 00:18:40.680 "trtype": "TCP", 00:18:40.680 "adrfam": "IPv4", 00:18:40.680 "traddr": "10.0.0.2", 00:18:40.680 "trsvcid": "4420" 00:18:40.680 }, 00:18:40.680 "peer_address": { 00:18:40.680 "trtype": "TCP", 00:18:40.680 "adrfam": "IPv4", 00:18:40.680 "traddr": "10.0.0.1", 00:18:40.680 "trsvcid": "34472" 00:18:40.680 }, 00:18:40.680 "auth": { 00:18:40.680 "state": "completed", 00:18:40.680 "digest": "sha384", 00:18:40.680 "dhgroup": "ffdhe4096" 00:18:40.680 } 00:18:40.680 } 00:18:40.680 ]' 00:18:40.680 20:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.680 20:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.680 20:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.680 20:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:40.680 20:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.680 20:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.680 20:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.680 20:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.939 20:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:18:41.507 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.507 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.507 20:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.507 20:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.507 20:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.507 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.507 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.507 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:41.507 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:41.767 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:41.767 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.767 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:41.767 20:54:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:41.767 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:41.767 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.767 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.767 20:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.767 20:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.767 20:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.767 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.767 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.026 00:18:42.026 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.026 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.026 20:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.285 20:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.285 20:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.285 20:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.285 20:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.285 20:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.285 20:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.285 { 00:18:42.285 "cntlid": 81, 00:18:42.285 "qid": 0, 00:18:42.285 "state": "enabled", 00:18:42.285 "thread": "nvmf_tgt_poll_group_000", 00:18:42.285 "listen_address": { 00:18:42.285 "trtype": "TCP", 00:18:42.285 "adrfam": "IPv4", 00:18:42.285 "traddr": "10.0.0.2", 00:18:42.285 "trsvcid": "4420" 00:18:42.285 }, 00:18:42.285 "peer_address": { 00:18:42.285 "trtype": "TCP", 00:18:42.285 "adrfam": "IPv4", 00:18:42.285 "traddr": "10.0.0.1", 00:18:42.285 "trsvcid": "34498" 00:18:42.285 }, 00:18:42.285 "auth": { 00:18:42.285 "state": "completed", 00:18:42.285 "digest": "sha384", 00:18:42.285 "dhgroup": "ffdhe6144" 00:18:42.285 } 00:18:42.285 } 00:18:42.285 ]' 00:18:42.285 20:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.285 20:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.285 20:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.285 20:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:42.285 20:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.544 20:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.544 20:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.544 20:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.544 20:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.479 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.480 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.046 00:18:44.046 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.046 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.046 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.046 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.046 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.046 20:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.046 20:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.046 20:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.046 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.046 { 00:18:44.046 "cntlid": 83, 00:18:44.046 "qid": 0, 00:18:44.046 "state": "enabled", 00:18:44.046 "thread": "nvmf_tgt_poll_group_000", 00:18:44.046 "listen_address": { 00:18:44.046 "trtype": "TCP", 00:18:44.046 "adrfam": "IPv4", 00:18:44.046 "traddr": "10.0.0.2", 00:18:44.046 "trsvcid": "4420" 00:18:44.046 }, 00:18:44.046 "peer_address": { 00:18:44.046 "trtype": "TCP", 00:18:44.046 "adrfam": "IPv4", 00:18:44.046 "traddr": "10.0.0.1", 00:18:44.046 "trsvcid": "40852" 00:18:44.046 }, 00:18:44.046 "auth": { 00:18:44.046 "state": "completed", 00:18:44.046 "digest": "sha384", 00:18:44.046 "dhgroup": "ffdhe6144" 00:18:44.046 } 00:18:44.046 } 00:18:44.046 ]' 00:18:44.046 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.046 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.046 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.305 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:44.305 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.305 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.305 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.305 20:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.305 20:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret 
DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:18:45.238 20:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.238 20:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.238 20:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.238 20:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.239 20:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.239 20:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.239 20:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:45.239 20:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:45.239 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:45.239 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.239 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.239 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:45.239 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:45.239 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.239 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.239 20:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.239 20:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.239 20:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.239 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.239 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.804 00:18:45.804 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.804 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.804 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.804 20:54:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.804 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.804 20:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.804 20:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.804 20:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.804 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.804 { 00:18:45.804 "cntlid": 85, 00:18:45.804 "qid": 0, 00:18:45.804 "state": "enabled", 00:18:45.804 "thread": "nvmf_tgt_poll_group_000", 00:18:45.804 "listen_address": { 00:18:45.804 "trtype": "TCP", 00:18:45.804 "adrfam": "IPv4", 00:18:45.804 "traddr": "10.0.0.2", 00:18:45.804 "trsvcid": "4420" 00:18:45.804 }, 00:18:45.804 "peer_address": { 00:18:45.804 "trtype": "TCP", 00:18:45.804 "adrfam": "IPv4", 00:18:45.804 "traddr": "10.0.0.1", 00:18:45.804 "trsvcid": "40894" 00:18:45.804 }, 00:18:45.804 "auth": { 00:18:45.804 "state": "completed", 00:18:45.804 "digest": "sha384", 00:18:45.804 "dhgroup": "ffdhe6144" 00:18:45.804 } 00:18:45.804 } 00:18:45.804 ]' 00:18:45.804 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.804 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.804 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.804 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:45.804 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.061 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.061 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.061 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.061 20:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:46.997 20:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.256 00:18:47.514 20:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.514 20:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.514 20:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.514 20:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.514 20:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.514 20:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.514 20:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.514 20:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.514 20:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.514 { 00:18:47.514 "cntlid": 87, 00:18:47.514 "qid": 0, 00:18:47.514 "state": "enabled", 00:18:47.514 "thread": "nvmf_tgt_poll_group_000", 00:18:47.514 "listen_address": { 00:18:47.514 "trtype": "TCP", 00:18:47.514 "adrfam": "IPv4", 00:18:47.514 "traddr": "10.0.0.2", 00:18:47.514 "trsvcid": "4420" 00:18:47.514 }, 00:18:47.514 "peer_address": { 00:18:47.514 "trtype": "TCP", 00:18:47.514 "adrfam": "IPv4", 00:18:47.514 "traddr": "10.0.0.1", 00:18:47.514 "trsvcid": "40924" 00:18:47.514 }, 00:18:47.514 "auth": { 00:18:47.514 "state": "completed", 
00:18:47.514 "digest": "sha384", 00:18:47.514 "dhgroup": "ffdhe6144" 00:18:47.514 } 00:18:47.514 } 00:18:47.514 ]' 00:18:47.514 20:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.514 20:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.514 20:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.773 20:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:47.773 20:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.773 20:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.773 20:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.773 20:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.773 20:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.709 20:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.276 00:18:49.276 20:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.276 20:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.276 20:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.533 20:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.533 20:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.533 20:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.533 20:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.533 20:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.533 20:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.533 { 00:18:49.533 "cntlid": 89, 00:18:49.533 "qid": 0, 00:18:49.533 "state": "enabled", 00:18:49.533 "thread": "nvmf_tgt_poll_group_000", 00:18:49.533 "listen_address": { 00:18:49.533 "trtype": "TCP", 00:18:49.533 "adrfam": "IPv4", 00:18:49.533 "traddr": "10.0.0.2", 00:18:49.533 "trsvcid": "4420" 00:18:49.533 }, 00:18:49.533 "peer_address": { 00:18:49.533 "trtype": "TCP", 00:18:49.533 "adrfam": "IPv4", 00:18:49.533 "traddr": "10.0.0.1", 00:18:49.533 "trsvcid": "40954" 00:18:49.533 }, 00:18:49.533 "auth": { 00:18:49.533 "state": "completed", 00:18:49.533 "digest": "sha384", 00:18:49.533 "dhgroup": "ffdhe8192" 00:18:49.533 } 00:18:49.533 } 00:18:49.533 ]' 00:18:49.533 20:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.533 20:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.533 20:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.533 20:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:49.533 20:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.533 20:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.533 20:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.533 20:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.789 20:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.725 20:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
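After each attach the script does not just rely on the connect succeeding: it confirms the controller name on the host side and then inspects the qpair's auth block on the target, checking that the negotiated digest, DH group, and final state match the combination under test. A sketch of that verification step for the sha384/ffdhe8192 iteration shown here, using the same RPCs and jq filters that appear in the trace (rpc.py again abbreviates spdk/scripts/rpc.py):

# host side: the attached controller should be reported as nvme0
name=$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# target side: the qpair's auth section records digest, dhgroup and final state
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# tear down before trying the next key/dhgroup combination
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0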
00:18:51.328 00:18:51.328 20:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.328 20:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.328 20:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.328 20:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.328 20:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.328 20:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.328 20:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.328 20:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.328 20:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.328 { 00:18:51.328 "cntlid": 91, 00:18:51.328 "qid": 0, 00:18:51.328 "state": "enabled", 00:18:51.328 "thread": "nvmf_tgt_poll_group_000", 00:18:51.328 "listen_address": { 00:18:51.328 "trtype": "TCP", 00:18:51.328 "adrfam": "IPv4", 00:18:51.328 "traddr": "10.0.0.2", 00:18:51.328 "trsvcid": "4420" 00:18:51.328 }, 00:18:51.328 "peer_address": { 00:18:51.328 "trtype": "TCP", 00:18:51.328 "adrfam": "IPv4", 00:18:51.328 "traddr": "10.0.0.1", 00:18:51.328 "trsvcid": "40986" 00:18:51.328 }, 00:18:51.328 "auth": { 00:18:51.328 "state": "completed", 00:18:51.328 "digest": "sha384", 00:18:51.328 "dhgroup": "ffdhe8192" 00:18:51.328 } 00:18:51.328 } 00:18:51.328 ]' 00:18:51.328 20:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.588 20:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.588 20:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.588 20:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.588 20:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.588 20:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.588 20:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.588 20:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.588 20:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.524 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.092 00:18:53.092 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.092 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.092 20:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.351 20:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.351 20:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.351 20:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.351 20:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.351 20:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.351 20:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.351 { 
00:18:53.351 "cntlid": 93, 00:18:53.351 "qid": 0, 00:18:53.351 "state": "enabled", 00:18:53.351 "thread": "nvmf_tgt_poll_group_000", 00:18:53.351 "listen_address": { 00:18:53.351 "trtype": "TCP", 00:18:53.351 "adrfam": "IPv4", 00:18:53.351 "traddr": "10.0.0.2", 00:18:53.351 "trsvcid": "4420" 00:18:53.351 }, 00:18:53.351 "peer_address": { 00:18:53.351 "trtype": "TCP", 00:18:53.351 "adrfam": "IPv4", 00:18:53.351 "traddr": "10.0.0.1", 00:18:53.351 "trsvcid": "41006" 00:18:53.351 }, 00:18:53.351 "auth": { 00:18:53.351 "state": "completed", 00:18:53.351 "digest": "sha384", 00:18:53.351 "dhgroup": "ffdhe8192" 00:18:53.351 } 00:18:53.351 } 00:18:53.351 ]' 00:18:53.351 20:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.351 20:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.351 20:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.351 20:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:53.351 20:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.611 20:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.611 20:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.611 20:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.611 20:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.548 20:54:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.548 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.117 00:18:55.117 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.117 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.117 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.117 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.117 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.117 20:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.117 20:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.117 20:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.117 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.117 { 00:18:55.117 "cntlid": 95, 00:18:55.117 "qid": 0, 00:18:55.117 "state": "enabled", 00:18:55.117 "thread": "nvmf_tgt_poll_group_000", 00:18:55.117 "listen_address": { 00:18:55.117 "trtype": "TCP", 00:18:55.117 "adrfam": "IPv4", 00:18:55.117 "traddr": "10.0.0.2", 00:18:55.117 "trsvcid": "4420" 00:18:55.117 }, 00:18:55.117 "peer_address": { 00:18:55.117 "trtype": "TCP", 00:18:55.117 "adrfam": "IPv4", 00:18:55.117 "traddr": "10.0.0.1", 00:18:55.117 "trsvcid": "45458" 00:18:55.117 }, 00:18:55.117 "auth": { 00:18:55.117 "state": "completed", 00:18:55.117 "digest": "sha384", 00:18:55.117 "dhgroup": "ffdhe8192" 00:18:55.117 } 00:18:55.117 } 00:18:55.117 ]' 00:18:55.117 20:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.375 20:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.375 20:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.375 20:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.375 20:54:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.375 20:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.375 20:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.375 20:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.634 20:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:18:56.201 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.201 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.201 20:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.201 20:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.201 20:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.201 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:56.201 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.201 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.201 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:56.201 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:56.460 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:56.460 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.460 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.460 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:56.460 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.460 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.460 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.460 20:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.460 20:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.460 20:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.460 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.460 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.718 00:18:56.718 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.718 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.718 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.718 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.976 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.976 20:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.976 20:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.976 20:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.976 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.976 { 00:18:56.976 "cntlid": 97, 00:18:56.976 "qid": 0, 00:18:56.976 "state": "enabled", 00:18:56.976 "thread": "nvmf_tgt_poll_group_000", 00:18:56.976 "listen_address": { 00:18:56.976 "trtype": "TCP", 00:18:56.976 "adrfam": "IPv4", 00:18:56.976 "traddr": "10.0.0.2", 00:18:56.976 "trsvcid": "4420" 00:18:56.976 }, 00:18:56.976 "peer_address": { 00:18:56.976 "trtype": "TCP", 00:18:56.976 "adrfam": "IPv4", 00:18:56.976 "traddr": "10.0.0.1", 00:18:56.976 "trsvcid": "45490" 00:18:56.976 }, 00:18:56.976 "auth": { 00:18:56.976 "state": "completed", 00:18:56.976 "digest": "sha512", 00:18:56.976 "dhgroup": "null" 00:18:56.976 } 00:18:56.976 } 00:18:56.976 ]' 00:18:56.976 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.976 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.976 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.976 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:56.976 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.976 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.976 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.976 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.234 20:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret 
DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:18:57.800 20:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.800 20:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.800 20:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.800 20:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.800 20:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.800 20:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.800 20:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:57.800 20:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:58.058 20:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:58.058 20:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.058 20:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.058 20:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:58.058 20:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.058 20:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.058 20:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.058 20:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.058 20:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.058 20:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.058 20:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.058 20:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.316 00:18:58.316 20:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.316 20:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.316 20:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.574 20:55:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.574 20:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.574 20:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.574 20:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.574 20:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.574 20:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.574 { 00:18:58.574 "cntlid": 99, 00:18:58.574 "qid": 0, 00:18:58.574 "state": "enabled", 00:18:58.574 "thread": "nvmf_tgt_poll_group_000", 00:18:58.574 "listen_address": { 00:18:58.574 "trtype": "TCP", 00:18:58.574 "adrfam": "IPv4", 00:18:58.574 "traddr": "10.0.0.2", 00:18:58.574 "trsvcid": "4420" 00:18:58.574 }, 00:18:58.574 "peer_address": { 00:18:58.574 "trtype": "TCP", 00:18:58.574 "adrfam": "IPv4", 00:18:58.574 "traddr": "10.0.0.1", 00:18:58.574 "trsvcid": "45524" 00:18:58.574 }, 00:18:58.574 "auth": { 00:18:58.574 "state": "completed", 00:18:58.574 "digest": "sha512", 00:18:58.574 "dhgroup": "null" 00:18:58.574 } 00:18:58.574 } 00:18:58.574 ]' 00:18:58.574 20:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.574 20:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.574 20:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.574 20:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:58.574 20:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.574 20:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.574 20:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.574 20:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.832 20:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:59.764 20:55:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.764 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.022 00:19:00.022 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.022 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.023 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.023 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.023 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.023 20:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.023 20:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.023 20:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.023 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.023 { 00:19:00.023 "cntlid": 101, 00:19:00.023 "qid": 0, 00:19:00.023 "state": "enabled", 00:19:00.023 "thread": "nvmf_tgt_poll_group_000", 00:19:00.023 "listen_address": { 00:19:00.023 "trtype": "TCP", 00:19:00.023 "adrfam": "IPv4", 00:19:00.023 "traddr": "10.0.0.2", 00:19:00.023 "trsvcid": "4420" 00:19:00.023 }, 00:19:00.023 "peer_address": { 00:19:00.023 "trtype": "TCP", 00:19:00.023 "adrfam": "IPv4", 00:19:00.023 "traddr": "10.0.0.1", 00:19:00.023 "trsvcid": "45558" 00:19:00.023 }, 00:19:00.023 "auth": 
{ 00:19:00.023 "state": "completed", 00:19:00.023 "digest": "sha512", 00:19:00.023 "dhgroup": "null" 00:19:00.023 } 00:19:00.023 } 00:19:00.023 ]' 00:19:00.023 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.280 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.280 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.280 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:00.280 20:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.280 20:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.280 20:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.280 20:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.537 20:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:19:01.101 20:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.101 20:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.101 20:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.101 20:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.101 20:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.101 20:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.101 20:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:01.101 20:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:01.359 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:01.359 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.359 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.359 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:01.359 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:01.359 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.359 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:01.359 20:55:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.359 20:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.359 20:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.359 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.359 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.617 00:19:01.617 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.617 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.617 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.617 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.617 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.617 20:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.617 20:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.874 20:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.874 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.874 { 00:19:01.874 "cntlid": 103, 00:19:01.874 "qid": 0, 00:19:01.874 "state": "enabled", 00:19:01.874 "thread": "nvmf_tgt_poll_group_000", 00:19:01.874 "listen_address": { 00:19:01.874 "trtype": "TCP", 00:19:01.874 "adrfam": "IPv4", 00:19:01.874 "traddr": "10.0.0.2", 00:19:01.874 "trsvcid": "4420" 00:19:01.874 }, 00:19:01.874 "peer_address": { 00:19:01.874 "trtype": "TCP", 00:19:01.874 "adrfam": "IPv4", 00:19:01.874 "traddr": "10.0.0.1", 00:19:01.874 "trsvcid": "45584" 00:19:01.874 }, 00:19:01.874 "auth": { 00:19:01.874 "state": "completed", 00:19:01.874 "digest": "sha512", 00:19:01.874 "dhgroup": "null" 00:19:01.874 } 00:19:01.874 } 00:19:01.874 ]' 00:19:01.874 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.874 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.874 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.874 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:01.874 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.874 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.874 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.874 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.131 20:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:19:02.696 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.696 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.696 20:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.696 20:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.696 20:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.696 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.696 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.696 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:02.696 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:02.953 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:02.953 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.953 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.954 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:02.954 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:02.954 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.954 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.954 20:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.954 20:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.954 20:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.954 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.954 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.212 00:19:03.212 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.212 20:55:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.212 20:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.471 20:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.471 20:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.471 20:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.471 20:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.471 20:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.471 20:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.471 { 00:19:03.471 "cntlid": 105, 00:19:03.471 "qid": 0, 00:19:03.471 "state": "enabled", 00:19:03.471 "thread": "nvmf_tgt_poll_group_000", 00:19:03.471 "listen_address": { 00:19:03.471 "trtype": "TCP", 00:19:03.471 "adrfam": "IPv4", 00:19:03.471 "traddr": "10.0.0.2", 00:19:03.471 "trsvcid": "4420" 00:19:03.471 }, 00:19:03.471 "peer_address": { 00:19:03.471 "trtype": "TCP", 00:19:03.471 "adrfam": "IPv4", 00:19:03.471 "traddr": "10.0.0.1", 00:19:03.471 "trsvcid": "45612" 00:19:03.471 }, 00:19:03.471 "auth": { 00:19:03.471 "state": "completed", 00:19:03.471 "digest": "sha512", 00:19:03.471 "dhgroup": "ffdhe2048" 00:19:03.471 } 00:19:03.471 } 00:19:03.471 ]' 00:19:03.471 20:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.471 20:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.471 20:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.471 20:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:03.471 20:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.471 20:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.471 20:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.471 20:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.730 20:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:19:04.298 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
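Each digest/dhgroup combination is also exercised from the kernel initiator: nvme-cli connects with the DH-HMAC-CHAP secrets passed inline (the DHHC-1:xx:...: strings in the trace), disconnects again, and the host entry is removed from the subsystem before the next group starts. A condensed sketch of that path, with the actual base64 secrets replaced by placeholders since they are specific to this run:

# kernel initiator: bidirectional DH-HMAC-CHAP with inline DHHC-1 secrets
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 'DHHC-1:00:<host key, base64>:' --dhchap-ctrl-secret 'DHHC-1:03:<controller key, base64>:'

nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# drop the host entry so the next iteration can re-add it with a different key
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be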
00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.577 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.836 00:19:04.836 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.836 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.836 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.094 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.094 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.094 20:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.094 20:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.094 20:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.094 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.094 { 00:19:05.094 "cntlid": 107, 00:19:05.094 "qid": 0, 00:19:05.094 "state": "enabled", 00:19:05.094 "thread": 
"nvmf_tgt_poll_group_000", 00:19:05.094 "listen_address": { 00:19:05.094 "trtype": "TCP", 00:19:05.094 "adrfam": "IPv4", 00:19:05.094 "traddr": "10.0.0.2", 00:19:05.094 "trsvcid": "4420" 00:19:05.094 }, 00:19:05.094 "peer_address": { 00:19:05.094 "trtype": "TCP", 00:19:05.094 "adrfam": "IPv4", 00:19:05.094 "traddr": "10.0.0.1", 00:19:05.094 "trsvcid": "54024" 00:19:05.094 }, 00:19:05.094 "auth": { 00:19:05.094 "state": "completed", 00:19:05.094 "digest": "sha512", 00:19:05.094 "dhgroup": "ffdhe2048" 00:19:05.094 } 00:19:05.095 } 00:19:05.095 ]' 00:19:05.095 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.095 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.095 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.095 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:05.095 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.095 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.095 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.095 20:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.375 20:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:19:05.956 20:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.956 20:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.956 20:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.956 20:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.956 20:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.956 20:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.956 20:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:05.956 20:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:06.215 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:06.215 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.215 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.215 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:06.215 20:55:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:06.215 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.215 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.215 20:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.215 20:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.215 20:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.215 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.215 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.474 00:19:06.474 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.474 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.474 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.733 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.733 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.733 20:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.733 20:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.733 20:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.733 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.733 { 00:19:06.733 "cntlid": 109, 00:19:06.733 "qid": 0, 00:19:06.733 "state": "enabled", 00:19:06.733 "thread": "nvmf_tgt_poll_group_000", 00:19:06.733 "listen_address": { 00:19:06.733 "trtype": "TCP", 00:19:06.733 "adrfam": "IPv4", 00:19:06.733 "traddr": "10.0.0.2", 00:19:06.733 "trsvcid": "4420" 00:19:06.733 }, 00:19:06.733 "peer_address": { 00:19:06.733 "trtype": "TCP", 00:19:06.733 "adrfam": "IPv4", 00:19:06.733 "traddr": "10.0.0.1", 00:19:06.733 "trsvcid": "54046" 00:19:06.733 }, 00:19:06.733 "auth": { 00:19:06.733 "state": "completed", 00:19:06.733 "digest": "sha512", 00:19:06.733 "dhgroup": "ffdhe2048" 00:19:06.733 } 00:19:06.733 } 00:19:06.733 ]' 00:19:06.733 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.733 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.733 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.733 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:06.733 20:55:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.733 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.733 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.733 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.992 20:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.928 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.929 20:55:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.187 00:19:08.187 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.187 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.187 20:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.187 20:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.187 20:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.187 20:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.187 20:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.187 20:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.187 20:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.187 { 00:19:08.187 "cntlid": 111, 00:19:08.187 "qid": 0, 00:19:08.187 "state": "enabled", 00:19:08.187 "thread": "nvmf_tgt_poll_group_000", 00:19:08.187 "listen_address": { 00:19:08.187 "trtype": "TCP", 00:19:08.187 "adrfam": "IPv4", 00:19:08.187 "traddr": "10.0.0.2", 00:19:08.187 "trsvcid": "4420" 00:19:08.187 }, 00:19:08.187 "peer_address": { 00:19:08.187 "trtype": "TCP", 00:19:08.187 "adrfam": "IPv4", 00:19:08.187 "traddr": "10.0.0.1", 00:19:08.187 "trsvcid": "54080" 00:19:08.187 }, 00:19:08.187 "auth": { 00:19:08.187 "state": "completed", 00:19:08.187 "digest": "sha512", 00:19:08.187 "dhgroup": "ffdhe2048" 00:19:08.187 } 00:19:08.187 } 00:19:08.187 ]' 00:19:08.187 20:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.446 20:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.446 20:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.446 20:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:08.446 20:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.446 20:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.446 20:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.446 20:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.705 20:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:19:09.293 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.556 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.816 00:19:09.816 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.816 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.816 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.074 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.074 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.074 20:55:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.074 20:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.074 20:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.074 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.074 { 00:19:10.074 "cntlid": 113, 00:19:10.074 "qid": 0, 00:19:10.074 "state": "enabled", 00:19:10.074 "thread": "nvmf_tgt_poll_group_000", 00:19:10.074 "listen_address": { 00:19:10.074 "trtype": "TCP", 00:19:10.074 "adrfam": "IPv4", 00:19:10.074 "traddr": "10.0.0.2", 00:19:10.074 "trsvcid": "4420" 00:19:10.074 }, 00:19:10.074 "peer_address": { 00:19:10.074 "trtype": "TCP", 00:19:10.074 "adrfam": "IPv4", 00:19:10.074 "traddr": "10.0.0.1", 00:19:10.074 "trsvcid": "54108" 00:19:10.074 }, 00:19:10.074 "auth": { 00:19:10.074 "state": "completed", 00:19:10.074 "digest": "sha512", 00:19:10.074 "dhgroup": "ffdhe3072" 00:19:10.074 } 00:19:10.074 } 00:19:10.074 ]' 00:19:10.074 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.074 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.074 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.074 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:10.074 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.074 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.074 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.074 20:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.334 20:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:19:11.275 20:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.275 20:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.275 20:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.275 20:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.275 20:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.275 20:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.275 20:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:11.275 20:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:11.275 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:11.275 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.275 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:11.275 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:11.275 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:11.275 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.275 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.275 20:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.275 20:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.275 20:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.275 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.276 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.535 00:19:11.535 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.535 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.535 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.535 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.535 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.535 20:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.535 20:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.794 20:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.794 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.794 { 00:19:11.794 "cntlid": 115, 00:19:11.794 "qid": 0, 00:19:11.794 "state": "enabled", 00:19:11.794 "thread": "nvmf_tgt_poll_group_000", 00:19:11.794 "listen_address": { 00:19:11.794 "trtype": "TCP", 00:19:11.794 "adrfam": "IPv4", 00:19:11.794 "traddr": "10.0.0.2", 00:19:11.794 "trsvcid": "4420" 00:19:11.794 }, 00:19:11.794 "peer_address": { 00:19:11.794 "trtype": "TCP", 00:19:11.794 "adrfam": "IPv4", 00:19:11.794 "traddr": "10.0.0.1", 00:19:11.794 "trsvcid": "54124" 00:19:11.794 }, 00:19:11.794 "auth": { 00:19:11.794 "state": "completed", 00:19:11.794 "digest": "sha512", 00:19:11.794 "dhgroup": "ffdhe3072" 00:19:11.794 } 00:19:11.794 } 
00:19:11.794 ]' 00:19:11.794 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.794 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.794 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.794 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.794 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.794 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.794 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.794 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.052 20:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:19:12.620 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.620 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.620 20:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.620 20:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.620 20:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.620 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.620 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:12.620 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:12.878 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:12.878 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.878 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.878 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:12.878 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:12.878 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.878 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.878 20:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.878 20:55:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.878 20:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.878 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.878 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.137 00:19:13.137 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.137 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.137 20:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.396 20:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.396 20:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.396 20:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.396 20:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.396 20:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.396 20:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.396 { 00:19:13.396 "cntlid": 117, 00:19:13.396 "qid": 0, 00:19:13.396 "state": "enabled", 00:19:13.396 "thread": "nvmf_tgt_poll_group_000", 00:19:13.396 "listen_address": { 00:19:13.396 "trtype": "TCP", 00:19:13.396 "adrfam": "IPv4", 00:19:13.396 "traddr": "10.0.0.2", 00:19:13.396 "trsvcid": "4420" 00:19:13.396 }, 00:19:13.396 "peer_address": { 00:19:13.396 "trtype": "TCP", 00:19:13.396 "adrfam": "IPv4", 00:19:13.396 "traddr": "10.0.0.1", 00:19:13.396 "trsvcid": "54166" 00:19:13.396 }, 00:19:13.396 "auth": { 00:19:13.396 "state": "completed", 00:19:13.396 "digest": "sha512", 00:19:13.396 "dhgroup": "ffdhe3072" 00:19:13.396 } 00:19:13.396 } 00:19:13.396 ]' 00:19:13.396 20:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.396 20:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.396 20:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.396 20:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:13.396 20:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.396 20:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.396 20:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.396 20:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.655 20:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.593 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.852 00:19:14.852 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.852 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.852 20:55:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.852 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.852 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.852 20:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.852 20:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.852 20:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.852 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.852 { 00:19:14.852 "cntlid": 119, 00:19:14.852 "qid": 0, 00:19:14.852 "state": "enabled", 00:19:14.852 "thread": "nvmf_tgt_poll_group_000", 00:19:14.852 "listen_address": { 00:19:14.852 "trtype": "TCP", 00:19:14.852 "adrfam": "IPv4", 00:19:14.852 "traddr": "10.0.0.2", 00:19:14.852 "trsvcid": "4420" 00:19:14.852 }, 00:19:14.852 "peer_address": { 00:19:14.852 "trtype": "TCP", 00:19:14.852 "adrfam": "IPv4", 00:19:14.852 "traddr": "10.0.0.1", 00:19:14.852 "trsvcid": "57356" 00:19:14.852 }, 00:19:14.852 "auth": { 00:19:14.852 "state": "completed", 00:19:14.852 "digest": "sha512", 00:19:14.852 "dhgroup": "ffdhe3072" 00:19:14.852 } 00:19:14.852 } 00:19:14.852 ]' 00:19:14.852 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.111 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.111 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.111 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:15.111 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.111 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.111 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.111 20:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.370 20:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:19:15.937 20:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.937 20:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:15.937 20:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.937 20:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.937 20:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.937 20:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.937 20:55:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.937 20:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.937 20:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:16.196 20:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:16.196 20:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.196 20:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.196 20:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:16.196 20:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:16.196 20:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.196 20:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.196 20:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.196 20:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.196 20:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.196 20:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.196 20:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.454 00:19:16.454 20:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.454 20:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.454 20:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.712 20:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.713 20:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.713 20:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.713 20:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.713 20:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.713 20:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.713 { 00:19:16.713 "cntlid": 121, 00:19:16.713 "qid": 0, 00:19:16.713 "state": "enabled", 00:19:16.713 "thread": "nvmf_tgt_poll_group_000", 00:19:16.713 "listen_address": { 00:19:16.713 "trtype": "TCP", 00:19:16.713 "adrfam": "IPv4", 
00:19:16.713 "traddr": "10.0.0.2", 00:19:16.713 "trsvcid": "4420" 00:19:16.713 }, 00:19:16.713 "peer_address": { 00:19:16.713 "trtype": "TCP", 00:19:16.713 "adrfam": "IPv4", 00:19:16.713 "traddr": "10.0.0.1", 00:19:16.713 "trsvcid": "57388" 00:19:16.713 }, 00:19:16.713 "auth": { 00:19:16.713 "state": "completed", 00:19:16.713 "digest": "sha512", 00:19:16.713 "dhgroup": "ffdhe4096" 00:19:16.713 } 00:19:16.713 } 00:19:16.713 ]' 00:19:16.713 20:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.713 20:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.713 20:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.713 20:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:16.713 20:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.713 20:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.713 20:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.713 20:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.972 20:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:17.910 20:55:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.910 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.170 00:19:18.170 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.170 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.170 20:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.170 20:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.170 20:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.170 20:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.170 20:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.429 20:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.429 20:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.429 { 00:19:18.429 "cntlid": 123, 00:19:18.429 "qid": 0, 00:19:18.429 "state": "enabled", 00:19:18.429 "thread": "nvmf_tgt_poll_group_000", 00:19:18.429 "listen_address": { 00:19:18.429 "trtype": "TCP", 00:19:18.429 "adrfam": "IPv4", 00:19:18.429 "traddr": "10.0.0.2", 00:19:18.429 "trsvcid": "4420" 00:19:18.429 }, 00:19:18.429 "peer_address": { 00:19:18.429 "trtype": "TCP", 00:19:18.429 "adrfam": "IPv4", 00:19:18.429 "traddr": "10.0.0.1", 00:19:18.429 "trsvcid": "57426" 00:19:18.429 }, 00:19:18.429 "auth": { 00:19:18.429 "state": "completed", 00:19:18.429 "digest": "sha512", 00:19:18.429 "dhgroup": "ffdhe4096" 00:19:18.429 } 00:19:18.429 } 00:19:18.429 ]' 00:19:18.429 20:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.429 20:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.429 20:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.429 20:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:18.429 20:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.429 20:55:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.429 20:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.429 20:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.688 20:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:19:19.270 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.270 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.270 20:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.270 20:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.270 20:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.270 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.270 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.270 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.532 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:19.532 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.532 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.532 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:19.532 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:19.532 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.532 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.532 20:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.532 20:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.532 20:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.532 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.532 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.789 00:19:19.789 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.789 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.789 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.084 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.084 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.084 20:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.084 20:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.084 20:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.084 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.084 { 00:19:20.084 "cntlid": 125, 00:19:20.084 "qid": 0, 00:19:20.084 "state": "enabled", 00:19:20.084 "thread": "nvmf_tgt_poll_group_000", 00:19:20.084 "listen_address": { 00:19:20.084 "trtype": "TCP", 00:19:20.084 "adrfam": "IPv4", 00:19:20.084 "traddr": "10.0.0.2", 00:19:20.084 "trsvcid": "4420" 00:19:20.084 }, 00:19:20.084 "peer_address": { 00:19:20.084 "trtype": "TCP", 00:19:20.084 "adrfam": "IPv4", 00:19:20.084 "traddr": "10.0.0.1", 00:19:20.084 "trsvcid": "57464" 00:19:20.084 }, 00:19:20.084 "auth": { 00:19:20.084 "state": "completed", 00:19:20.084 "digest": "sha512", 00:19:20.084 "dhgroup": "ffdhe4096" 00:19:20.084 } 00:19:20.084 } 00:19:20.084 ]' 00:19:20.084 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.084 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.084 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.084 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:20.084 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.084 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.084 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.084 20:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.344 20:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:19:20.911 20:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
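The nvme connect / nvme disconnect pairs that recur above are the Linux-host half of each round: once the SPDK bdev_nvme path has authenticated, the same subsystem is connected with nvme-cli using the DHHC-1 secrets for that key index (rounds where no controller key is configured, key3 here, pass only --dhchap-secret; the others also pass --dhchap-ctrl-secret), then disconnected, and the host entry is removed from the subsystem before the next key is tried. A sketch with placeholder secrets and host UUID follows; the real DHHC-1 strings are the generated values shown in the log.

    # Kernel-initiator check with nvme-cli (sketch; secrets are placeholders).
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:<host-uuid> --hostid <host-uuid> \
        --dhchap-secret 'DHHC-1:02:<host-secret>' \
        --dhchap-ctrl-secret 'DHHC-1:01:<controller-secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # Drop the host entry again so the next digest/DH-group/key round
    # can re-add it with a different key pair.
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:<host-uuid>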
00:19:20.912 20:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.912 20:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.912 20:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.912 20:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.912 20:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.912 20:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:20.912 20:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:21.171 20:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:21.171 20:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.171 20:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.171 20:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:21.171 20:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:21.171 20:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.171 20:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:21.171 20:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.171 20:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.171 20:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.171 20:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.171 20:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.430 00:19:21.430 20:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.430 20:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.430 20:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.690 20:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.690 20:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.690 20:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.690 20:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:21.690 20:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.690 20:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.690 { 00:19:21.690 "cntlid": 127, 00:19:21.690 "qid": 0, 00:19:21.690 "state": "enabled", 00:19:21.690 "thread": "nvmf_tgt_poll_group_000", 00:19:21.690 "listen_address": { 00:19:21.690 "trtype": "TCP", 00:19:21.690 "adrfam": "IPv4", 00:19:21.690 "traddr": "10.0.0.2", 00:19:21.690 "trsvcid": "4420" 00:19:21.690 }, 00:19:21.690 "peer_address": { 00:19:21.690 "trtype": "TCP", 00:19:21.690 "adrfam": "IPv4", 00:19:21.690 "traddr": "10.0.0.1", 00:19:21.690 "trsvcid": "57478" 00:19:21.690 }, 00:19:21.690 "auth": { 00:19:21.690 "state": "completed", 00:19:21.690 "digest": "sha512", 00:19:21.690 "dhgroup": "ffdhe4096" 00:19:21.690 } 00:19:21.690 } 00:19:21.690 ]' 00:19:21.690 20:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.690 20:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.690 20:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.690 20:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:21.690 20:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.690 20:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.690 20:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.690 20:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.948 20:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.887 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.147 00:19:23.147 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.147 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.147 20:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.407 20:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.407 20:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.407 20:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.407 20:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.407 20:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.407 20:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.407 { 00:19:23.407 "cntlid": 129, 00:19:23.407 "qid": 0, 00:19:23.407 "state": "enabled", 00:19:23.407 "thread": "nvmf_tgt_poll_group_000", 00:19:23.407 "listen_address": { 00:19:23.407 "trtype": "TCP", 00:19:23.407 "adrfam": "IPv4", 00:19:23.407 "traddr": "10.0.0.2", 00:19:23.407 "trsvcid": "4420" 00:19:23.407 }, 00:19:23.407 "peer_address": { 00:19:23.407 "trtype": "TCP", 00:19:23.407 "adrfam": "IPv4", 00:19:23.407 "traddr": "10.0.0.1", 00:19:23.407 "trsvcid": "57498" 00:19:23.407 }, 00:19:23.407 "auth": { 00:19:23.407 "state": "completed", 00:19:23.407 "digest": "sha512", 00:19:23.407 "dhgroup": "ffdhe6144" 00:19:23.407 } 00:19:23.407 } 00:19:23.407 ]' 00:19:23.407 20:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.407 20:55:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.407 20:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.407 20:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:23.407 20:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.667 20:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.667 20:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.667 20:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.667 20:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.606 20:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.607 20:55:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.607 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.607 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.866 00:19:24.866 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.866 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.866 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.125 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.125 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.125 20:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.125 20:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.125 20:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.125 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.125 { 00:19:25.125 "cntlid": 131, 00:19:25.125 "qid": 0, 00:19:25.125 "state": "enabled", 00:19:25.125 "thread": "nvmf_tgt_poll_group_000", 00:19:25.125 "listen_address": { 00:19:25.125 "trtype": "TCP", 00:19:25.125 "adrfam": "IPv4", 00:19:25.125 "traddr": "10.0.0.2", 00:19:25.125 "trsvcid": "4420" 00:19:25.125 }, 00:19:25.125 "peer_address": { 00:19:25.125 "trtype": "TCP", 00:19:25.125 "adrfam": "IPv4", 00:19:25.125 "traddr": "10.0.0.1", 00:19:25.125 "trsvcid": "35184" 00:19:25.125 }, 00:19:25.125 "auth": { 00:19:25.125 "state": "completed", 00:19:25.125 "digest": "sha512", 00:19:25.125 "dhgroup": "ffdhe6144" 00:19:25.125 } 00:19:25.125 } 00:19:25.125 ]' 00:19:25.125 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.125 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.125 20:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.125 20:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:25.125 20:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.383 20:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.383 20:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.383 20:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.383 20:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:19:26.319 20:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.319 20:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.319 20:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.319 20:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.319 20:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.319 20:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.319 20:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:26.319 20:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:26.319 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:26.319 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.319 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.319 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:26.319 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:26.319 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.319 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.319 20:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.319 20:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.319 20:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.319 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.320 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.896 00:19:26.896 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.896 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.896 20:55:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.896 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.896 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.896 20:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.896 20:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.896 20:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.896 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.896 { 00:19:26.896 "cntlid": 133, 00:19:26.896 "qid": 0, 00:19:26.896 "state": "enabled", 00:19:26.896 "thread": "nvmf_tgt_poll_group_000", 00:19:26.896 "listen_address": { 00:19:26.896 "trtype": "TCP", 00:19:26.896 "adrfam": "IPv4", 00:19:26.896 "traddr": "10.0.0.2", 00:19:26.896 "trsvcid": "4420" 00:19:26.896 }, 00:19:26.896 "peer_address": { 00:19:26.896 "trtype": "TCP", 00:19:26.896 "adrfam": "IPv4", 00:19:26.896 "traddr": "10.0.0.1", 00:19:26.896 "trsvcid": "35224" 00:19:26.896 }, 00:19:26.896 "auth": { 00:19:26.896 "state": "completed", 00:19:26.896 "digest": "sha512", 00:19:26.896 "dhgroup": "ffdhe6144" 00:19:26.896 } 00:19:26.896 } 00:19:26.896 ]' 00:19:26.896 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.896 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.896 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.155 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:27.155 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.155 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.155 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.155 20:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.155 20:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.093 20:55:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.093 20:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.662 00:19:28.662 20:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.662 20:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.662 20:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.662 20:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.662 20:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.662 20:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.662 20:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.662 20:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.662 20:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.662 { 00:19:28.662 "cntlid": 135, 00:19:28.662 "qid": 0, 00:19:28.662 "state": "enabled", 00:19:28.662 "thread": "nvmf_tgt_poll_group_000", 00:19:28.662 "listen_address": { 00:19:28.662 "trtype": "TCP", 00:19:28.662 "adrfam": "IPv4", 00:19:28.662 "traddr": "10.0.0.2", 00:19:28.662 "trsvcid": "4420" 00:19:28.662 }, 
00:19:28.662 "peer_address": { 00:19:28.662 "trtype": "TCP", 00:19:28.662 "adrfam": "IPv4", 00:19:28.662 "traddr": "10.0.0.1", 00:19:28.662 "trsvcid": "35246" 00:19:28.662 }, 00:19:28.662 "auth": { 00:19:28.662 "state": "completed", 00:19:28.662 "digest": "sha512", 00:19:28.662 "dhgroup": "ffdhe6144" 00:19:28.662 } 00:19:28.662 } 00:19:28.662 ]' 00:19:28.662 20:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.662 20:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.662 20:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.922 20:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:28.922 20:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.922 20:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.922 20:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.922 20:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.922 20:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:19:29.859 20:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.859 20:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.860 20:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.426 00:19:30.426 20:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.426 20:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.426 20:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.686 20:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.686 20:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.686 20:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.686 20:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.686 20:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.686 20:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.686 { 00:19:30.686 "cntlid": 137, 00:19:30.686 "qid": 0, 00:19:30.686 "state": "enabled", 00:19:30.686 "thread": "nvmf_tgt_poll_group_000", 00:19:30.686 "listen_address": { 00:19:30.686 "trtype": "TCP", 00:19:30.686 "adrfam": "IPv4", 00:19:30.686 "traddr": "10.0.0.2", 00:19:30.686 "trsvcid": "4420" 00:19:30.686 }, 00:19:30.686 "peer_address": { 00:19:30.686 "trtype": "TCP", 00:19:30.686 "adrfam": "IPv4", 00:19:30.686 "traddr": "10.0.0.1", 00:19:30.686 "trsvcid": "35280" 00:19:30.686 }, 00:19:30.686 "auth": { 00:19:30.686 "state": "completed", 00:19:30.686 "digest": "sha512", 00:19:30.686 "dhgroup": "ffdhe8192" 00:19:30.686 } 00:19:30.686 } 00:19:30.686 ]' 00:19:30.686 20:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.686 20:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.686 20:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.686 20:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:30.686 20:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.686 20:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.686 20:55:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.686 20:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.945 20:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.881 20:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.453 00:19:32.453 20:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.453 20:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.453 20:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.711 20:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.711 20:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.711 20:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.711 20:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.711 20:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.711 20:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.711 { 00:19:32.711 "cntlid": 139, 00:19:32.711 "qid": 0, 00:19:32.711 "state": "enabled", 00:19:32.711 "thread": "nvmf_tgt_poll_group_000", 00:19:32.711 "listen_address": { 00:19:32.711 "trtype": "TCP", 00:19:32.711 "adrfam": "IPv4", 00:19:32.711 "traddr": "10.0.0.2", 00:19:32.711 "trsvcid": "4420" 00:19:32.711 }, 00:19:32.711 "peer_address": { 00:19:32.711 "trtype": "TCP", 00:19:32.711 "adrfam": "IPv4", 00:19:32.711 "traddr": "10.0.0.1", 00:19:32.711 "trsvcid": "35320" 00:19:32.711 }, 00:19:32.711 "auth": { 00:19:32.711 "state": "completed", 00:19:32.711 "digest": "sha512", 00:19:32.711 "dhgroup": "ffdhe8192" 00:19:32.711 } 00:19:32.711 } 00:19:32.711 ]' 00:19:32.711 20:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.711 20:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.711 20:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.711 20:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:32.711 20:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.711 20:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.711 20:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.711 20:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.970 20:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTYzYjIxMDMzN2YyNmFmZDlmY2UxYTVkNjE3ODU4OWX2J+qg: --dhchap-ctrl-secret DHHC-1:02:ZDM1N2RlZjIyYTlmNmU1YTc4OTljMWJkYzE5YmIxZGUyZDc0MDZjZmZjOTkwOGZlntIz8A==: 00:19:33.536 20:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.795 20:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.362 00:19:34.362 20:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.362 20:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.362 20:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.621 20:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.621 20:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.621 20:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.621 20:55:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:34.621 20:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.621 20:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.621 { 00:19:34.621 "cntlid": 141, 00:19:34.621 "qid": 0, 00:19:34.621 "state": "enabled", 00:19:34.621 "thread": "nvmf_tgt_poll_group_000", 00:19:34.621 "listen_address": { 00:19:34.621 "trtype": "TCP", 00:19:34.621 "adrfam": "IPv4", 00:19:34.621 "traddr": "10.0.0.2", 00:19:34.621 "trsvcid": "4420" 00:19:34.621 }, 00:19:34.621 "peer_address": { 00:19:34.621 "trtype": "TCP", 00:19:34.621 "adrfam": "IPv4", 00:19:34.621 "traddr": "10.0.0.1", 00:19:34.621 "trsvcid": "47018" 00:19:34.621 }, 00:19:34.621 "auth": { 00:19:34.621 "state": "completed", 00:19:34.621 "digest": "sha512", 00:19:34.621 "dhgroup": "ffdhe8192" 00:19:34.621 } 00:19:34.621 } 00:19:34.621 ]' 00:19:34.621 20:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.621 20:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.621 20:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.621 20:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:34.621 20:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.621 20:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.621 20:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.621 20:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.880 20:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTdjNmIxMmFmZGZlMjJlZmUwNDg5YzhmMWFjOWIxYmJlYjM0NWZiMGY5N2UzOGQ3zIFVLQ==: --dhchap-ctrl-secret DHHC-1:01:YjdhYWE2NjdmNjExZTYwMmY3OTQzNjY2NDIyNTY0NGQuOtct: 00:19:35.510 20:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.510 20:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:35.510 20:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.510 20:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.510 20:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.510 20:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.510 20:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:35.510 20:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:35.769 20:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:19:35.769 20:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.769 20:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.769 20:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:35.769 20:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:35.769 20:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.769 20:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:35.769 20:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.769 20:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.769 20:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.769 20:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.769 20:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.337 00:19:36.337 20:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.337 20:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.337 20:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.337 20:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.337 20:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.337 20:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.337 20:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.595 20:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.595 20:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.595 { 00:19:36.595 "cntlid": 143, 00:19:36.595 "qid": 0, 00:19:36.595 "state": "enabled", 00:19:36.595 "thread": "nvmf_tgt_poll_group_000", 00:19:36.595 "listen_address": { 00:19:36.595 "trtype": "TCP", 00:19:36.595 "adrfam": "IPv4", 00:19:36.595 "traddr": "10.0.0.2", 00:19:36.595 "trsvcid": "4420" 00:19:36.595 }, 00:19:36.595 "peer_address": { 00:19:36.595 "trtype": "TCP", 00:19:36.595 "adrfam": "IPv4", 00:19:36.595 "traddr": "10.0.0.1", 00:19:36.595 "trsvcid": "47030" 00:19:36.595 }, 00:19:36.595 "auth": { 00:19:36.595 "state": "completed", 00:19:36.595 "digest": "sha512", 00:19:36.595 "dhgroup": "ffdhe8192" 00:19:36.595 } 00:19:36.595 } 00:19:36.595 ]' 00:19:36.595 20:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.595 20:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.595 
20:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.595 20:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:36.595 20:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.595 20:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.595 20:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.595 20:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.854 20:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:19:37.423 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.423 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.423 20:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.423 20:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.423 20:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.423 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:37.423 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:37.423 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:37.423 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:37.423 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:37.423 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:37.682 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:37.682 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.682 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.682 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:37.682 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:37.682 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.682 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:37.682 20:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.682 20:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.682 20:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.682 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.682 20:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.250 00:19:38.250 20:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.250 20:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.250 20:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.509 20:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.509 20:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.509 20:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.509 20:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.509 20:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.509 20:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.509 { 00:19:38.509 "cntlid": 145, 00:19:38.509 "qid": 0, 00:19:38.509 "state": "enabled", 00:19:38.509 "thread": "nvmf_tgt_poll_group_000", 00:19:38.509 "listen_address": { 00:19:38.509 "trtype": "TCP", 00:19:38.509 "adrfam": "IPv4", 00:19:38.510 "traddr": "10.0.0.2", 00:19:38.510 "trsvcid": "4420" 00:19:38.510 }, 00:19:38.510 "peer_address": { 00:19:38.510 "trtype": "TCP", 00:19:38.510 "adrfam": "IPv4", 00:19:38.510 "traddr": "10.0.0.1", 00:19:38.510 "trsvcid": "47048" 00:19:38.510 }, 00:19:38.510 "auth": { 00:19:38.510 "state": "completed", 00:19:38.510 "digest": "sha512", 00:19:38.510 "dhgroup": "ffdhe8192" 00:19:38.510 } 00:19:38.510 } 00:19:38.510 ]' 00:19:38.510 20:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.510 20:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.510 20:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.510 20:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.510 20:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.510 20:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.510 20:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.510 20:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.769 20:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MjgzMjJlMjM0MDc4MTU0NzBlNzY5Y2VhZmFjODJhZWQwNjFjOWYyNmM4YjI4Yzk1ZEZccw==: --dhchap-ctrl-secret DHHC-1:03:OTU4MmU4MzExMjJlYjgzODJjMzU5NDg1MTNjZTljYjdiMGRjMjVkMjBlMTM5ZjU2M2E0MTlmOTYxYjAwYjg2YviLzRA=: 00:19:39.337 20:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:39.597 20:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:19:39.856 request: 00:19:39.856 { 00:19:39.856 "name": "nvme0", 00:19:39.856 "trtype": "tcp", 00:19:39.856 "traddr": "10.0.0.2", 00:19:39.856 "adrfam": "ipv4", 00:19:39.856 "trsvcid": "4420", 00:19:39.856 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:39.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:39.856 "prchk_reftag": false, 00:19:39.856 "prchk_guard": false, 00:19:39.856 "hdgst": false, 00:19:39.856 "ddgst": false, 00:19:39.856 "dhchap_key": "key2", 00:19:39.856 "method": "bdev_nvme_attach_controller", 00:19:39.856 "req_id": 1 00:19:39.856 } 00:19:39.856 Got JSON-RPC error response 00:19:39.856 response: 00:19:39.856 { 00:19:39.856 "code": -5, 00:19:39.856 "message": "Input/output error" 00:19:39.856 } 00:19:39.856 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:39.856 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.114 20:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.372 request: 00:19:40.372 { 00:19:40.372 "name": "nvme0", 00:19:40.372 "trtype": "tcp", 00:19:40.372 "traddr": "10.0.0.2", 00:19:40.372 "adrfam": "ipv4", 00:19:40.372 "trsvcid": "4420", 00:19:40.372 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:40.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:40.372 "prchk_reftag": false, 00:19:40.372 "prchk_guard": false, 00:19:40.372 "hdgst": false, 00:19:40.372 "ddgst": false, 00:19:40.372 "dhchap_key": "key1", 00:19:40.372 "dhchap_ctrlr_key": "ckey2", 00:19:40.372 "method": "bdev_nvme_attach_controller", 00:19:40.372 "req_id": 1 00:19:40.372 } 00:19:40.372 Got JSON-RPC error response 00:19:40.372 response: 00:19:40.372 { 00:19:40.372 "code": -5, 00:19:40.372 "message": "Input/output error" 00:19:40.372 } 00:19:40.372 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:40.372 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:40.372 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:40.372 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:40.372 20:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.372 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.372 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.372 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.373 20:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:40.373 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.373 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.632 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.632 20:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.632 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:40.632 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.632 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:19:40.632 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:40.632 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:40.632 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:40.632 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.632 20:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.891 request: 00:19:40.891 { 00:19:40.891 "name": "nvme0", 00:19:40.891 "trtype": "tcp", 00:19:40.891 "traddr": "10.0.0.2", 00:19:40.891 "adrfam": "ipv4", 00:19:40.891 "trsvcid": "4420", 00:19:40.891 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:40.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:40.891 "prchk_reftag": false, 00:19:40.891 "prchk_guard": false, 00:19:40.891 "hdgst": false, 00:19:40.891 "ddgst": false, 00:19:40.891 "dhchap_key": "key1", 00:19:40.891 "dhchap_ctrlr_key": "ckey1", 00:19:40.891 "method": "bdev_nvme_attach_controller", 00:19:40.891 "req_id": 1 00:19:40.891 } 00:19:40.891 Got JSON-RPC error response 00:19:40.891 response: 00:19:40.891 { 00:19:40.891 "code": -5, 00:19:40.891 "message": "Input/output error" 00:19:40.891 } 00:19:40.891 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:40.891 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:40.891 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:40.891 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:40.891 20:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.891 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.891 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.891 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.891 20:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1576268 00:19:40.891 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1576268 ']' 00:19:40.891 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1576268 00:19:40.891 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:40.891 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.891 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1576268 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1576268' 00:19:41.151 killing process with pid 1576268 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1576268 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1576268 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1603227 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1603227 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1603227 ']' 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.151 20:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1603227 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1603227 ']' 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
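[editor's note] The target is restarted above with --wait-for-rpc and the nvmf_auth log flag, and the harness then polls until the new process (pid 1603227) answers on /var/tmp/spdk.sock. Below is a minimal sketch of that waitforlisten-style loop; it is an illustration only, not SPDK's actual helper, and it assumes scripts/rpc.py and the spdk_get_version RPC method are reachable as in the log.

    # sketch: poll the RPC socket until the freshly started target responds
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1     # target died before listening
            if scripts/rpc.py -s "$sock" spdk_get_version &>/dev/null; then
                return 0                               # socket is up and answering
            fi
            sleep 0.5
        done
        return 1                                       # gave up after 100 tries
    }
    # usage: wait_for_rpc 1603227 /var/tmp/spdk.sock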
00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.087 20:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.346 20:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.346 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:42.346 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.346 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.346 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:42.346 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:42.346 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.346 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:42.346 20:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.346 20:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.346 20:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.346 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.346 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.913 00:19:42.913 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.913 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.913 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.913 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.913 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.913 20:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.913 20:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.913 20:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.913 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.913 { 00:19:42.913 
"cntlid": 1, 00:19:42.913 "qid": 0, 00:19:42.913 "state": "enabled", 00:19:42.913 "thread": "nvmf_tgt_poll_group_000", 00:19:42.913 "listen_address": { 00:19:42.913 "trtype": "TCP", 00:19:42.913 "adrfam": "IPv4", 00:19:42.913 "traddr": "10.0.0.2", 00:19:42.913 "trsvcid": "4420" 00:19:42.913 }, 00:19:42.913 "peer_address": { 00:19:42.913 "trtype": "TCP", 00:19:42.913 "adrfam": "IPv4", 00:19:42.913 "traddr": "10.0.0.1", 00:19:42.913 "trsvcid": "47098" 00:19:42.913 }, 00:19:42.913 "auth": { 00:19:42.913 "state": "completed", 00:19:42.913 "digest": "sha512", 00:19:42.913 "dhgroup": "ffdhe8192" 00:19:42.913 } 00:19:42.913 } 00:19:42.913 ]' 00:19:42.913 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.171 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.171 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.171 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.171 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.171 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.171 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.171 20:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.429 20:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YWE3NWY2ZTM5NjI1OGYzY2JhMTNjYzE4MWYzZmQ2Nzg4Y2U1ZTQ4NGUwZjRjYmRjMDdiZjUwNDUzZTM0YTY2MCl5KaU=: 00:19:43.996 20:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.996 20:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:43.996 20:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.996 20:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.996 20:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.996 20:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:43.996 20:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.996 20:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.996 20:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.996 20:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:43.996 20:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:44.255 20:55:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.255 20:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:44.255 20:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.255 20:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:44.255 20:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:44.255 20:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:44.255 20:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:44.255 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.255 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.514 request: 00:19:44.514 { 00:19:44.514 "name": "nvme0", 00:19:44.514 "trtype": "tcp", 00:19:44.514 "traddr": "10.0.0.2", 00:19:44.514 "adrfam": "ipv4", 00:19:44.514 "trsvcid": "4420", 00:19:44.514 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:44.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:44.514 "prchk_reftag": false, 00:19:44.514 "prchk_guard": false, 00:19:44.514 "hdgst": false, 00:19:44.514 "ddgst": false, 00:19:44.514 "dhchap_key": "key3", 00:19:44.514 "method": "bdev_nvme_attach_controller", 00:19:44.514 "req_id": 1 00:19:44.514 } 00:19:44.514 Got JSON-RPC error response 00:19:44.514 response: 00:19:44.514 { 00:19:44.514 "code": -5, 00:19:44.514 "message": "Input/output error" 00:19:44.514 } 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.514 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.773 request: 00:19:44.773 { 00:19:44.773 "name": "nvme0", 00:19:44.773 "trtype": "tcp", 00:19:44.773 "traddr": "10.0.0.2", 00:19:44.773 "adrfam": "ipv4", 00:19:44.773 "trsvcid": "4420", 00:19:44.773 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:44.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:44.773 "prchk_reftag": false, 00:19:44.773 "prchk_guard": false, 00:19:44.773 "hdgst": false, 00:19:44.773 "ddgst": false, 00:19:44.773 "dhchap_key": "key3", 00:19:44.773 "method": "bdev_nvme_attach_controller", 00:19:44.773 "req_id": 1 00:19:44.773 } 00:19:44.773 Got JSON-RPC error response 00:19:44.773 response: 00:19:44.773 { 00:19:44.773 "code": -5, 00:19:44.773 "message": "Input/output error" 00:19:44.773 } 00:19:44.773 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:44.773 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:44.773 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:44.773 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:44.773 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:44.773 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:44.773 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:44.773 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:44.773 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:44.773 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:44.773 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.773 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.773 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:45.032 request: 00:19:45.032 { 00:19:45.032 "name": "nvme0", 00:19:45.032 "trtype": "tcp", 00:19:45.032 "traddr": "10.0.0.2", 00:19:45.032 "adrfam": "ipv4", 00:19:45.032 "trsvcid": "4420", 00:19:45.032 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:45.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:45.032 "prchk_reftag": false, 00:19:45.032 "prchk_guard": false, 00:19:45.032 "hdgst": false, 00:19:45.032 "ddgst": false, 00:19:45.032 
"dhchap_key": "key0", 00:19:45.032 "dhchap_ctrlr_key": "key1", 00:19:45.032 "method": "bdev_nvme_attach_controller", 00:19:45.032 "req_id": 1 00:19:45.032 } 00:19:45.032 Got JSON-RPC error response 00:19:45.032 response: 00:19:45.032 { 00:19:45.032 "code": -5, 00:19:45.032 "message": "Input/output error" 00:19:45.032 } 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:45.032 20:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:45.290 00:19:45.290 20:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:45.290 20:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:45.290 20:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.549 20:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.549 20:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.549 20:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.550 20:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:45.550 20:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:45.550 20:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1576306 00:19:45.550 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1576306 ']' 00:19:45.550 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1576306 00:19:45.550 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:45.550 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.550 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1576306 00:19:45.550 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:45.550 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:45.550 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1576306' 00:19:45.550 killing process with pid 1576306 00:19:45.550 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1576306 00:19:45.550 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1576306 
00:19:45.809 20:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:45.809 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:45.809 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:45.809 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:45.809 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:45.809 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:45.809 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:45.809 rmmod nvme_tcp 00:19:45.809 rmmod nvme_fabrics 00:19:45.809 rmmod nvme_keyring 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1603227 ']' 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1603227 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1603227 ']' 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1603227 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1603227 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1603227' 00:19:46.070 killing process with pid 1603227 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1603227 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1603227 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.070 20:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.614 20:55:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:48.614 20:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Am8 /tmp/spdk.key-sha256.Vzc /tmp/spdk.key-sha384.Rbr /tmp/spdk.key-sha512.yOz /tmp/spdk.key-sha512.r8o /tmp/spdk.key-sha384.8in /tmp/spdk.key-sha256.MN6 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:48.614 00:19:48.614 real 2m24.370s 00:19:48.614 user 5m20.842s 00:19:48.614 sys 0m21.469s 00:19:48.614 20:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:48.614 20:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.614 ************************************ 00:19:48.615 END TEST nvmf_auth_target 00:19:48.615 ************************************ 00:19:48.615 20:55:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:48.615 20:55:52 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:48.615 20:55:52 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:48.615 20:55:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:48.615 20:55:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:48.615 20:55:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:48.615 ************************************ 00:19:48.615 START TEST nvmf_bdevio_no_huge 00:19:48.615 ************************************ 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:48.615 * Looking for test storage... 00:19:48.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
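[editor's note] common.sh, sourced above, derives the host identity once from nvme gen-hostnqn and reuses it for every connect in the suite. The fragment below is a hedged condensation of that idea using the names visible in the log; the real common.sh does considerably more (ports, transport options, the NO_HUGE app arguments).

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID, reused as --hostid
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn "$NVME_HOSTNQN" --hostid "$NVME_HOSTID"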
00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:48.615 20:55:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:55.268 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:55.268 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:55.268 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:55.268 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.268 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:55.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:19:55.530 00:19:55.530 --- 10.0.0.2 ping statistics --- 00:19:55.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.530 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:55.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:55.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:19:55.530 00:19:55.530 --- 10.0.0.1 ping statistics --- 00:19:55.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.530 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:55.530 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:55.791 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:55.791 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:55.792 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.792 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.792 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1608276 00:19:55.792 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1608276 00:19:55.792 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:55.792 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1608276 ']' 00:19:55.792 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.792 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.792 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.792 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.792 20:55:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.792 [2024-07-15 20:55:59.510481] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:19:55.792 [2024-07-15 20:55:59.510548] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:55.792 [2024-07-15 20:55:59.602901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:56.053 [2024-07-15 20:55:59.709601] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:56.053 [2024-07-15 20:55:59.709654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.053 [2024-07-15 20:55:59.709662] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.053 [2024-07-15 20:55:59.709669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.053 [2024-07-15 20:55:59.709676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.053 [2024-07-15 20:55:59.709842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:56.053 [2024-07-15 20:55:59.709979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:56.053 [2024-07-15 20:55:59.710169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:56.053 [2024-07-15 20:55:59.710221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:56.626 [2024-07-15 20:56:00.361762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:56.626 Malloc0 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.626 20:56:00 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:56.626 [2024-07-15 20:56:00.415547] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.626 { 00:19:56.626 "params": { 00:19:56.626 "name": "Nvme$subsystem", 00:19:56.626 "trtype": "$TEST_TRANSPORT", 00:19:56.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.626 "adrfam": "ipv4", 00:19:56.626 "trsvcid": "$NVMF_PORT", 00:19:56.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.626 "hdgst": ${hdgst:-false}, 00:19:56.626 "ddgst": ${ddgst:-false} 00:19:56.626 }, 00:19:56.626 "method": "bdev_nvme_attach_controller" 00:19:56.626 } 00:19:56.626 EOF 00:19:56.626 )") 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:56.626 20:56:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:56.626 "params": { 00:19:56.626 "name": "Nvme1", 00:19:56.626 "trtype": "tcp", 00:19:56.626 "traddr": "10.0.0.2", 00:19:56.626 "adrfam": "ipv4", 00:19:56.626 "trsvcid": "4420", 00:19:56.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.626 "hdgst": false, 00:19:56.626 "ddgst": false 00:19:56.626 }, 00:19:56.626 "method": "bdev_nvme_attach_controller" 00:19:56.626 }' 00:19:56.626 [2024-07-15 20:56:00.479178] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
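The bdevio run being launched here is driven entirely by the target-side rpc_cmd calls and the generated attach-controller JSON shown above. Outside the harness, the same setup looks roughly like the following sketch (rpc.py and bdevio paths as logged; the JSON file name is illustrative, since the harness pipes the config in on /dev/fd/62):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # target side: TCP transport, a 64 MiB malloc bdev, one subsystem listening on 10.0.0.2:4420
    $RPC nvmf_create_transport -t tcp -o -u 8192              # transport opts as used by the harness
    $RPC bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: hand bdevio the bdev_nvme_attach_controller config printed above,
    # wrapped in the standard --json layout
    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
        --json /tmp/bdevio_nvme.json --no-huge -s 1024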
00:19:56.626 [2024-07-15 20:56:00.479246] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1608537 ] 00:19:56.887 [2024-07-15 20:56:00.548143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:56.887 [2024-07-15 20:56:00.645384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.887 [2024-07-15 20:56:00.645581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.887 [2024-07-15 20:56:00.645586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.148 I/O targets: 00:19:57.148 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:57.148 00:19:57.148 00:19:57.148 CUnit - A unit testing framework for C - Version 2.1-3 00:19:57.148 http://cunit.sourceforge.net/ 00:19:57.148 00:19:57.148 00:19:57.148 Suite: bdevio tests on: Nvme1n1 00:19:57.148 Test: blockdev write read block ...passed 00:19:57.148 Test: blockdev write zeroes read block ...passed 00:19:57.408 Test: blockdev write zeroes read no split ...passed 00:19:57.408 Test: blockdev write zeroes read split ...passed 00:19:57.408 Test: blockdev write zeroes read split partial ...passed 00:19:57.408 Test: blockdev reset ...[2024-07-15 20:56:01.099593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.408 [2024-07-15 20:56:01.099650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f54c10 (9): Bad file descriptor 00:19:57.408 [2024-07-15 20:56:01.114886] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:57.408 passed 00:19:57.408 Test: blockdev write read 8 blocks ...passed 00:19:57.408 Test: blockdev write read size > 128k ...passed 00:19:57.408 Test: blockdev write read invalid size ...passed 00:19:57.408 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:57.408 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:57.408 Test: blockdev write read max offset ...passed 00:19:57.408 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:57.408 Test: blockdev writev readv 8 blocks ...passed 00:19:57.408 Test: blockdev writev readv 30 x 1block ...passed 00:19:57.670 Test: blockdev writev readv block ...passed 00:19:57.670 Test: blockdev writev readv size > 128k ...passed 00:19:57.670 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:57.670 Test: blockdev comparev and writev ...[2024-07-15 20:56:01.333201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.670 [2024-07-15 20:56:01.333228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:57.670 [2024-07-15 20:56:01.333239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.670 [2024-07-15 20:56:01.333245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:57.670 [2024-07-15 20:56:01.333639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.670 [2024-07-15 20:56:01.333648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:57.670 [2024-07-15 20:56:01.333657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.670 [2024-07-15 20:56:01.333663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:57.670 [2024-07-15 20:56:01.334063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.670 [2024-07-15 20:56:01.334070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:57.670 [2024-07-15 20:56:01.334079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.670 [2024-07-15 20:56:01.334084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:57.670 [2024-07-15 20:56:01.334437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.670 [2024-07-15 20:56:01.334444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:57.670 [2024-07-15 20:56:01.334454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:57.670 [2024-07-15 20:56:01.334459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:57.670 passed 00:19:57.670 Test: blockdev nvme passthru rw ...passed 00:19:57.670 Test: blockdev nvme passthru vendor specific ...[2024-07-15 20:56:01.416602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:57.670 [2024-07-15 20:56:01.416611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:57.670 [2024-07-15 20:56:01.416841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:57.670 [2024-07-15 20:56:01.416847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:57.670 [2024-07-15 20:56:01.417097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:57.670 [2024-07-15 20:56:01.417109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:57.670 [2024-07-15 20:56:01.417379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:57.670 [2024-07-15 20:56:01.417386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:57.670 passed 00:19:57.670 Test: blockdev nvme admin passthru ...passed 00:19:57.670 Test: blockdev copy ...passed 00:19:57.670 00:19:57.670 Run Summary: Type Total Ran Passed Failed Inactive 00:19:57.670 suites 1 1 n/a 0 0 00:19:57.670 tests 23 23 23 0 0 00:19:57.670 asserts 152 152 152 0 n/a 00:19:57.670 00:19:57.670 Elapsed time = 1.056 seconds 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:57.932 rmmod nvme_tcp 00:19:57.932 rmmod nvme_fabrics 00:19:57.932 rmmod nvme_keyring 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1608276 ']' 00:19:57.932 20:56:01 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1608276 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1608276 ']' 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1608276 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:57.932 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1608276 00:19:58.193 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:58.193 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:58.193 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1608276' 00:19:58.193 killing process with pid 1608276 00:19:58.193 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1608276 00:19:58.193 20:56:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1608276 00:19:58.452 20:56:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:58.452 20:56:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:58.452 20:56:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:58.452 20:56:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.452 20:56:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:58.452 20:56:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.452 20:56:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.452 20:56:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.364 20:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:00.364 00:20:00.364 real 0m12.097s 00:20:00.364 user 0m13.807s 00:20:00.364 sys 0m6.299s 00:20:00.364 20:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:00.364 20:56:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 ************************************ 00:20:00.364 END TEST nvmf_bdevio_no_huge 00:20:00.364 ************************************ 00:20:00.364 20:56:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:00.364 20:56:04 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:00.364 20:56:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:00.364 20:56:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.364 20:56:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:00.364 ************************************ 00:20:00.364 START TEST nvmf_tls 00:20:00.364 ************************************ 00:20:00.364 20:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:00.625 * Looking for test storage... 
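The teardown just logged is the mirror image of the earlier setup: unload the kernel NVMe/TCP initiator modules, stop the target, and drop the namespace plumbing before the nvmf_tls suite starts. Roughly (the kill/wait pid and the address flush are taken from the log; the ip netns delete is an assumption about what _remove_spdk_ns does, since it is not expanded here):

    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics / nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill 1608276 && wait 1608276   # nvmfpid of the target started for this suite
    ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1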
00:20:00.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:00.625 20:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.625 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:00.626 20:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:08.772 
20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:08.772 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:08.772 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:08.772 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:08.772 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:08.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:20:08.772 00:20:08.772 --- 10.0.0.2 ping statistics --- 00:20:08.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.772 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:20:08.772 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:08.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:20:08.772 00:20:08.772 --- 10.0.0.1 ping statistics --- 00:20:08.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.772 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1612957 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1612957 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1612957 ']' 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.773 20:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.773 [2024-07-15 20:56:11.525899] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:20:08.773 [2024-07-15 20:56:11.525948] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.773 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.773 [2024-07-15 20:56:11.610334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.773 [2024-07-15 20:56:11.692533] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.773 [2024-07-15 20:56:11.692590] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
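This second target is started with --wait-for-rpc because the TLS pieces have to go in before subsystem initialization: the default sock implementation is switched to ssl and pinned to TLS 1.3, and only then is framework_start_init issued. Condensed from the rpc.py calls that follow in the log:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC sock_set_default_impl -i ssl                        # use the ssl sock implementation
    $RPC sock_impl_set_options -i ssl --tls-version 13       # require TLS 1.3
    $RPC sock_impl_get_options -i ssl | jq -r .tls_version   # verify: prints 13
    $RPC framework_start_init                                # finish app initialization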
00:20:08.773 [2024-07-15 20:56:11.692598] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.773 [2024-07-15 20:56:11.692605] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.773 [2024-07-15 20:56:11.692612] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.773 [2024-07-15 20:56:11.692643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.773 20:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.773 20:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:08.773 20:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.773 20:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:08.773 20:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.773 20:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.773 20:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:08.773 20:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:08.773 true 00:20:08.773 20:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:08.773 20:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:09.034 20:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:09.034 20:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:09.034 20:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:09.034 20:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.034 20:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:09.295 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:09.295 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:09.295 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:09.556 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.556 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:09.556 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:09.556 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:09.556 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.556 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:09.816 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:09.816 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:09.816 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:10.077 20:56:13 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.077 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:10.077 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:10.077 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:10.077 20:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:10.337 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.337 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.ZjY0TIshd7 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.WunW11Eceq 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ZjY0TIshd7 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.WunW11Eceq 00:20:10.611 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:10.871 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:10.871 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ZjY0TIshd7 00:20:10.871 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZjY0TIshd7 00:20:10.871 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:11.131 [2024-07-15 20:56:14.876744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.131 20:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:11.390 20:56:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:11.390 [2024-07-15 20:56:15.189498] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:11.390 [2024-07-15 20:56:15.189691] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.390 20:56:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:11.649 malloc0 00:20:11.649 20:56:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:11.649 20:56:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZjY0TIshd7 00:20:11.910 [2024-07-15 20:56:15.648679] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:11.910 20:56:15 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ZjY0TIshd7 00:20:11.910 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.977 Initializing NVMe Controllers 00:20:21.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:21.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:21.977 Initialization complete. Launching workers. 
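The TLS-specific target setup above reduces to: write the interchange-format PSK printed by format_interchange_psk to a 0600 file, create the listener with -k so it negotiates TLS, and bind the host NQN to that PSK; spdk_nvme_perf then connects with -S ssl and the same key. A condensed sketch using the key and temp path generated in this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=/tmp/tmp.ZjY0TIshd7
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
    chmod 0600 "$KEY"                                         # keep the PSK file private

    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl \
        -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path "$KEY"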
00:20:21.977 ======================================================== 00:20:21.977 Latency(us) 00:20:21.977 Device Information : IOPS MiB/s Average min max 00:20:21.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18852.68 73.64 3394.76 1108.50 5893.71 00:20:21.977 ======================================================== 00:20:21.977 Total : 18852.68 73.64 3394.76 1108.50 5893.71 00:20:21.977 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZjY0TIshd7 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZjY0TIshd7' 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1615696 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1615696 /var/tmp/bdevperf.sock 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1615696 ']' 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.977 20:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.977 [2024-07-15 20:56:25.810039] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
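The run_bdevperf step starting here exercises the same key through the bdev layer: bdevperf is launched idle with -z and its own RPC socket, the TLS-protected controller is attached over that socket with --psk, and bdevperf.py then kicks off the actual I/O. Condensed from the commands in the surrounding log:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZjY0TIshd7    # creates bdev TLSTESTn1

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests              # run the 10 s verify workload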
00:20:21.977 [2024-07-15 20:56:25.810166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615696 ] 00:20:21.978 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.978 [2024-07-15 20:56:25.866120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.238 [2024-07-15 20:56:25.918552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.810 20:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.810 20:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:22.810 20:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZjY0TIshd7 00:20:22.810 [2024-07-15 20:56:26.691354] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.810 [2024-07-15 20:56:26.691414] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:23.070 TLSTESTn1 00:20:23.071 20:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:23.071 Running I/O for 10 seconds... 00:20:35.304 00:20:35.304 Latency(us) 00:20:35.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.304 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:35.304 Verification LBA range: start 0x0 length 0x2000 00:20:35.304 TLSTESTn1 : 10.08 2578.63 10.07 0.00 0.00 49459.85 6307.84 141557.76 00:20:35.304 =================================================================================================================== 00:20:35.304 Total : 2578.63 10.07 0.00 0.00 49459.85 6307.84 141557.76 00:20:35.304 0 00:20:35.304 20:56:36 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1615696 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1615696 ']' 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1615696 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1615696 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1615696' 00:20:35.304 killing process with pid 1615696 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1615696 00:20:35.304 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.304 00:20:35.304 Latency(us) 00:20:35.304 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:20:35.304 =================================================================================================================== 00:20:35.304 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.304 [2024-07-15 20:56:37.057949] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1615696 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WunW11Eceq 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WunW11Eceq 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WunW11Eceq 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WunW11Eceq' 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1617956 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1617956 /var/tmp/bdevperf.sock 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1617956 ']' 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:35.304 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.304 [2024-07-15 20:56:37.233393] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:20:35.304 [2024-07-15 20:56:37.233463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1617956 ] 00:20:35.304 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.305 [2024-07-15 20:56:37.282055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.305 [2024-07-15 20:56:37.334118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.305 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.305 20:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:35.305 20:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WunW11Eceq 00:20:35.305 [2024-07-15 20:56:38.138949] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.305 [2024-07-15 20:56:38.139007] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:35.305 [2024-07-15 20:56:38.147280] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:35.305 [2024-07-15 20:56:38.148070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ddec0 (107): Transport endpoint is not connected 00:20:35.305 [2024-07-15 20:56:38.149064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ddec0 (9): Bad file descriptor 00:20:35.305 [2024-07-15 20:56:38.150066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:35.305 [2024-07-15 20:56:38.150075] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:35.305 [2024-07-15 20:56:38.150081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:35.305 request: 00:20:35.305 { 00:20:35.305 "name": "TLSTEST", 00:20:35.305 "trtype": "tcp", 00:20:35.305 "traddr": "10.0.0.2", 00:20:35.305 "adrfam": "ipv4", 00:20:35.305 "trsvcid": "4420", 00:20:35.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.305 "prchk_reftag": false, 00:20:35.305 "prchk_guard": false, 00:20:35.305 "hdgst": false, 00:20:35.305 "ddgst": false, 00:20:35.305 "psk": "/tmp/tmp.WunW11Eceq", 00:20:35.305 "method": "bdev_nvme_attach_controller", 00:20:35.305 "req_id": 1 00:20:35.305 } 00:20:35.305 Got JSON-RPC error response 00:20:35.305 response: 00:20:35.305 { 00:20:35.305 "code": -5, 00:20:35.305 "message": "Input/output error" 00:20:35.305 } 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1617956 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1617956 ']' 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1617956 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1617956 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1617956' 00:20:35.305 killing process with pid 1617956 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1617956 00:20:35.305 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.305 00:20:35.305 Latency(us) 00:20:35.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.305 =================================================================================================================== 00:20:35.305 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:35.305 [2024-07-15 20:56:38.235017] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1617956 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZjY0TIshd7 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZjY0TIshd7 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZjY0TIshd7 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZjY0TIshd7' 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1618053 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1618053 /var/tmp/bdevperf.sock 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1618053 ']' 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:35.305 20:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.305 [2024-07-15 20:56:38.391418] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
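The JSON-RPC exchange above is the expected outcome of the @146 negative case: the initiator presents /tmp/tmp.WunW11Eceq, a key that does not match the one registered for nqn.2016-06.io.spdk:host1 on the target, so the connection is dropped during the TLS handshake and the attach surfaces as the -5 (Input/output error) response shown. The surrounding NOT wrapper only asserts that the attach fails; a minimal sketch of that pattern, using the same socket and arguments as in the trace:

    # expected-failure attach: a zero exit status here would mean the test itself failed
    if scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
           -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
           --psk /tmp/tmp.WunW11Eceq; then
        echo "attach with a mismatched PSK unexpectedly succeeded" >&2
        exit 1
    fi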
00:20:35.305 [2024-07-15 20:56:38.391472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618053 ] 00:20:35.305 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.305 [2024-07-15 20:56:38.441357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.305 [2024-07-15 20:56:38.492513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.305 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.305 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:35.305 20:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ZjY0TIshd7 00:20:35.566 [2024-07-15 20:56:39.301601] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.566 [2024-07-15 20:56:39.301667] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:35.566 [2024-07-15 20:56:39.310026] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:35.566 [2024-07-15 20:56:39.310044] posix.c: 528:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:35.566 [2024-07-15 20:56:39.310065] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:35.566 [2024-07-15 20:56:39.310788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef4ec0 (107): Transport endpoint is not connected 00:20:35.567 [2024-07-15 20:56:39.311781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef4ec0 (9): Bad file descriptor 00:20:35.567 [2024-07-15 20:56:39.312782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:35.567 [2024-07-15 20:56:39.312788] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:35.567 [2024-07-15 20:56:39.312794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:35.567 request: 00:20:35.567 { 00:20:35.567 "name": "TLSTEST", 00:20:35.567 "trtype": "tcp", 00:20:35.567 "traddr": "10.0.0.2", 00:20:35.567 "adrfam": "ipv4", 00:20:35.567 "trsvcid": "4420", 00:20:35.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.567 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:35.567 "prchk_reftag": false, 00:20:35.567 "prchk_guard": false, 00:20:35.567 "hdgst": false, 00:20:35.567 "ddgst": false, 00:20:35.567 "psk": "/tmp/tmp.ZjY0TIshd7", 00:20:35.567 "method": "bdev_nvme_attach_controller", 00:20:35.567 "req_id": 1 00:20:35.567 } 00:20:35.567 Got JSON-RPC error response 00:20:35.567 response: 00:20:35.567 { 00:20:35.567 "code": -5, 00:20:35.567 "message": "Input/output error" 00:20:35.567 } 00:20:35.567 20:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1618053 00:20:35.567 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1618053 ']' 00:20:35.567 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1618053 00:20:35.567 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:35.567 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.567 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1618053 00:20:35.567 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:35.567 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:35.567 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1618053' 00:20:35.567 killing process with pid 1618053 00:20:35.567 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1618053 00:20:35.567 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.567 00:20:35.567 Latency(us) 00:20:35.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.567 =================================================================================================================== 00:20:35.567 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:35.567 [2024-07-15 20:56:39.394977] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:35.567 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1618053 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZjY0TIshd7 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZjY0TIshd7 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZjY0TIshd7 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZjY0TIshd7' 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1618386 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1618386 /var/tmp/bdevperf.sock 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1618386 ']' 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:35.828 20:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.828 [2024-07-15 20:56:39.551761] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
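The identity strings in the failures on either side of this point show how the target selects a key: the initiator presents a TLS PSK identity of the form 'NVMe0R01 <hostnqn> <subnqn>' during the handshake, and tcp_sock_get_key resolves it against the hosts admitted with --psk. Swapping in host2 (@149, above) or cnode2 (@152, below) therefore produces 'Could not find PSK for identity' even though the key file itself is valid. A hypothetical helper, only to illustrate the lookup string:

    # illustrative only: reproduce the identity the target searches for
    psk_identity() { printf 'NVMe0R01 %s %s\n' "$1" "$2"; }   # args: hostnqn subnqn
    psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1  (no such host registered, so the attach fails)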
00:20:35.828 [2024-07-15 20:56:39.551819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618386 ] 00:20:35.828 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.828 [2024-07-15 20:56:39.601938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.828 [2024-07-15 20:56:39.652395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.771 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.771 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:36.771 20:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZjY0TIshd7 00:20:36.771 [2024-07-15 20:56:40.469698] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.771 [2024-07-15 20:56:40.469762] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:36.771 [2024-07-15 20:56:40.473987] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:36.771 [2024-07-15 20:56:40.474008] posix.c: 528:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:36.771 [2024-07-15 20:56:40.474028] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:36.771 [2024-07-15 20:56:40.474675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa62ec0 (107): Transport endpoint is not connected 00:20:36.771 [2024-07-15 20:56:40.475665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa62ec0 (9): Bad file descriptor 00:20:36.771 [2024-07-15 20:56:40.476667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:36.771 [2024-07-15 20:56:40.476674] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:36.771 [2024-07-15 20:56:40.476681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:36.771 request: 00:20:36.771 { 00:20:36.771 "name": "TLSTEST", 00:20:36.771 "trtype": "tcp", 00:20:36.771 "traddr": "10.0.0.2", 00:20:36.771 "adrfam": "ipv4", 00:20:36.771 "trsvcid": "4420", 00:20:36.771 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.771 "prchk_reftag": false, 00:20:36.771 "prchk_guard": false, 00:20:36.771 "hdgst": false, 00:20:36.771 "ddgst": false, 00:20:36.771 "psk": "/tmp/tmp.ZjY0TIshd7", 00:20:36.771 "method": "bdev_nvme_attach_controller", 00:20:36.771 "req_id": 1 00:20:36.771 } 00:20:36.771 Got JSON-RPC error response 00:20:36.771 response: 00:20:36.771 { 00:20:36.771 "code": -5, 00:20:36.771 "message": "Input/output error" 00:20:36.771 } 00:20:36.771 20:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1618386 00:20:36.771 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1618386 ']' 00:20:36.771 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1618386 00:20:36.771 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:36.771 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:36.771 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1618386 00:20:36.771 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:36.771 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:36.771 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1618386' 00:20:36.771 killing process with pid 1618386 00:20:36.771 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1618386 00:20:36.771 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.771 00:20:36.771 Latency(us) 00:20:36.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.771 =================================================================================================================== 00:20:36.771 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.771 [2024-07-15 20:56:40.562039] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:36.771 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1618386 00:20:37.032 20:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:37.032 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:37.032 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:37.032 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:37.032 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:37.032 20:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.032 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:37.032 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.032 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1618680 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1618680 /var/tmp/bdevperf.sock 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1618680 ']' 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.033 20:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.033 [2024-07-15 20:56:40.721097] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:20:37.033 [2024-07-15 20:56:40.721167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618680 ] 00:20:37.033 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.033 [2024-07-15 20:56:40.771030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.033 [2024-07-15 20:56:40.822795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:37.975 [2024-07-15 20:56:41.635716] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:37.975 [2024-07-15 20:56:41.637462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaed4a0 (9): Bad file descriptor 00:20:37.975 [2024-07-15 20:56:41.638461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:37.975 [2024-07-15 20:56:41.638470] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:37.975 [2024-07-15 20:56:41.638477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:37.975 request: 00:20:37.975 { 00:20:37.975 "name": "TLSTEST", 00:20:37.975 "trtype": "tcp", 00:20:37.975 "traddr": "10.0.0.2", 00:20:37.975 "adrfam": "ipv4", 00:20:37.975 "trsvcid": "4420", 00:20:37.975 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.975 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.975 "prchk_reftag": false, 00:20:37.975 "prchk_guard": false, 00:20:37.975 "hdgst": false, 00:20:37.975 "ddgst": false, 00:20:37.975 "method": "bdev_nvme_attach_controller", 00:20:37.975 "req_id": 1 00:20:37.975 } 00:20:37.975 Got JSON-RPC error response 00:20:37.975 response: 00:20:37.975 { 00:20:37.975 "code": -5, 00:20:37.975 "message": "Input/output error" 00:20:37.975 } 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1618680 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1618680 ']' 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1618680 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1618680 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1618680' 00:20:37.975 killing process with pid 1618680 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1618680 00:20:37.975 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.975 00:20:37.975 Latency(us) 00:20:37.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.975 =================================================================================================================== 00:20:37.975 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1618680 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1612957 00:20:37.975 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1612957 ']' 00:20:37.976 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1612957 00:20:37.976 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:37.976 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:37.976 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1612957 00:20:38.237 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:38.237 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:38.237 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1612957' 00:20:38.237 
killing process with pid 1612957 00:20:38.237 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1612957 00:20:38.237 [2024-07-15 20:56:41.882994] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:38.237 20:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1612957 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.sPMzwTZbj9 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.sPMzwTZbj9 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1618889 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1618889 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1618889 ']' 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.237 20:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.237 [2024-07-15 20:56:42.127367] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
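The @159 step above turns the raw 48-hex-character key into the NVMe TLS PSK interchange form NVMeTLSkey-1:02:...: that is then written to /tmp/tmp.sPMzwTZbj9 and restricted to mode 0600. A sketch of what the format_key helper appears to compute, assuming the payload is base64 of the configured key bytes followed by their little-endian CRC-32, with the second argument (2) selecting the 02 hash tag:

    key=00112233445566778899aabbccddeeff0011223344556677
    # assumed reconstruction of nvmf/common.sh's python snippet seen in the trace
    python3 -c 'import base64,zlib,sys; k=sys.argv[1].encode(); print("NVMeTLSkey-1:02:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key"
    # should reproduce the key_long value above, NVMeTLSkey-1:02:MDAx...wWXNJw==: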
00:20:38.237 [2024-07-15 20:56:42.127427] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.498 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.498 [2024-07-15 20:56:42.209937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.498 [2024-07-15 20:56:42.265370] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.498 [2024-07-15 20:56:42.265401] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.498 [2024-07-15 20:56:42.265406] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.498 [2024-07-15 20:56:42.265411] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.498 [2024-07-15 20:56:42.265415] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.498 [2024-07-15 20:56:42.265434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.071 20:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.071 20:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:39.071 20:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:39.071 20:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:39.071 20:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.071 20:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.071 20:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.sPMzwTZbj9 00:20:39.071 20:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sPMzwTZbj9 00:20:39.071 20:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:39.332 [2024-07-15 20:56:43.054973] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.332 20:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:39.592 20:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:39.592 [2024-07-15 20:56:43.363720] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.592 [2024-07-15 20:56:43.363886] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.592 20:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:39.852 malloc0 00:20:39.852 20:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:39.852 20:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.sPMzwTZbj9 00:20:40.114 [2024-07-15 20:56:43.794570] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sPMzwTZbj9 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sPMzwTZbj9' 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1619259 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1619259 /var/tmp/bdevperf.sock 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1619259 ']' 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.114 20:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.114 [2024-07-15 20:56:43.833463] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
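setup_nvmf_tgt (@165, just above) is the target-side mirror of the client flow: it creates the TCP transport, a subsystem, a TLS-enabled listener (the -k flag is what makes nvmf_tcp_listen negotiate TLS and print the 'experimental' notice), a malloc namespace, and finally admits host1 with the PSK file. Collected from the trace, with paths shortened to the SPDK tree root, the sequence is roughly:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sPMzwTZbj9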
00:20:40.114 [2024-07-15 20:56:43.833501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1619259 ] 00:20:40.114 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.114 [2024-07-15 20:56:43.874896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.114 [2024-07-15 20:56:43.927053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.376 20:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.376 20:56:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:40.376 20:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sPMzwTZbj9 00:20:40.376 [2024-07-15 20:56:44.146435] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.376 [2024-07-15 20:56:44.146492] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:40.376 TLSTESTn1 00:20:40.376 20:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:40.638 Running I/O for 10 seconds... 00:20:50.785 00:20:50.785 Latency(us) 00:20:50.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.785 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:50.786 Verification LBA range: start 0x0 length 0x2000 00:20:50.786 TLSTESTn1 : 10.03 2458.58 9.60 0.00 0.00 51975.80 4778.67 105294.51 00:20:50.786 =================================================================================================================== 00:20:50.786 Total : 2458.58 9.60 0.00 0.00 51975.80 4778.67 105294.51 00:20:50.786 0 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1619259 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1619259 ']' 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1619259 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1619259 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1619259' 00:20:50.786 killing process with pid 1619259 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1619259 00:20:50.786 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.786 00:20:50.786 Latency(us) 00:20:50.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:50.786 =================================================================================================================== 00:20:50.786 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.786 [2024-07-15 20:56:54.470111] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1619259 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.sPMzwTZbj9 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sPMzwTZbj9 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sPMzwTZbj9 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sPMzwTZbj9 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sPMzwTZbj9' 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1621440 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1621440 /var/tmp/bdevperf.sock 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1621440 ']' 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.786 20:56:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.786 [2024-07-15 20:56:54.646279] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:20:50.786 [2024-07-15 20:56:54.646343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621440 ] 00:20:50.786 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.046 [2024-07-15 20:56:54.696914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.046 [2024-07-15 20:56:54.748555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.616 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.616 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:51.616 20:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sPMzwTZbj9 00:20:51.876 [2024-07-15 20:56:55.545537] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.876 [2024-07-15 20:56:55.545576] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:51.876 [2024-07-15 20:56:55.545581] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.sPMzwTZbj9 00:20:51.876 request: 00:20:51.876 { 00:20:51.876 "name": "TLSTEST", 00:20:51.876 "trtype": "tcp", 00:20:51.876 "traddr": "10.0.0.2", 00:20:51.876 "adrfam": "ipv4", 00:20:51.876 "trsvcid": "4420", 00:20:51.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.876 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.876 "prchk_reftag": false, 00:20:51.876 "prchk_guard": false, 00:20:51.876 "hdgst": false, 00:20:51.876 "ddgst": false, 00:20:51.876 "psk": "/tmp/tmp.sPMzwTZbj9", 00:20:51.876 "method": "bdev_nvme_attach_controller", 00:20:51.876 "req_id": 1 00:20:51.876 } 00:20:51.876 Got JSON-RPC error response 00:20:51.876 response: 00:20:51.876 { 00:20:51.876 "code": -1, 00:20:51.876 "message": "Operation not permitted" 00:20:51.876 } 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1621440 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1621440 ']' 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1621440 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1621440 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1621440' 00:20:51.876 killing process with pid 1621440 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1621440 00:20:51.876 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.876 00:20:51.876 Latency(us) 00:20:51.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.876 
=================================================================================================================== 00:20:51.876 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1621440 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1618889 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1618889 ']' 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1618889 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:51.876 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1618889 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1618889' 00:20:52.137 killing process with pid 1618889 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1618889 00:20:52.137 [2024-07-15 20:56:55.788773] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1618889 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1621596 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1621596 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1621596 ']' 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:52.137 20:56:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.137 [2024-07-15 20:56:55.973728] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:20:52.137 [2024-07-15 20:56:55.973785] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.137 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.397 [2024-07-15 20:56:56.054424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.397 [2024-07-15 20:56:56.114259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.397 [2024-07-15 20:56:56.114294] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.397 [2024-07-15 20:56:56.114300] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.397 [2024-07-15 20:56:56.114305] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.397 [2024-07-15 20:56:56.114309] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.397 [2024-07-15 20:56:56.114326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.sPMzwTZbj9 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.sPMzwTZbj9 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.sPMzwTZbj9 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sPMzwTZbj9 00:20:52.967 20:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:53.227 [2024-07-15 20:56:56.909157] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.227 20:56:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:53.227 
20:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:53.487 [2024-07-15 20:56:57.201863] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.487 [2024-07-15 20:56:57.202031] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.487 20:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:53.487 malloc0 00:20:53.487 20:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:53.747 20:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sPMzwTZbj9 00:20:54.007 [2024-07-15 20:56:57.648734] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:54.007 [2024-07-15 20:56:57.648753] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:54.007 [2024-07-15 20:56:57.648773] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:54.007 request: 00:20:54.007 { 00:20:54.007 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.007 "host": "nqn.2016-06.io.spdk:host1", 00:20:54.007 "psk": "/tmp/tmp.sPMzwTZbj9", 00:20:54.007 "method": "nvmf_subsystem_add_host", 00:20:54.007 "req_id": 1 00:20:54.007 } 00:20:54.007 Got JSON-RPC error response 00:20:54.007 response: 00:20:54.007 { 00:20:54.007 "code": -32603, 00:20:54.007 "message": "Internal error" 00:20:54.007 } 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1621596 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1621596 ']' 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1621596 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1621596 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1621596' 00:20:54.008 killing process with pid 1621596 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1621596 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1621596 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.sPMzwTZbj9 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:54.008 
20:56:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1622055 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1622055 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1622055 ']' 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.008 20:56:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:54.008 [2024-07-15 20:56:57.894839] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:20:54.008 [2024-07-15 20:56:57.894891] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.268 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.268 [2024-07-15 20:56:57.976640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.268 [2024-07-15 20:56:58.038115] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.268 [2024-07-15 20:56:58.038158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.268 [2024-07-15 20:56:58.038163] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.268 [2024-07-15 20:56:58.038168] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.268 [2024-07-15 20:56:58.038172] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
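For orientation, the setup_nvmf_tgt helper traced above boils down to the RPC sequence below; ./scripts/rpc.py is shorthand for the full workspace path used in this run, and the key path is the one generated by the test. The earlier nvmf_subsystem_add_host failure ("Incorrect permissions for PSK file", reported as JSON-RPC -32603 Internal error) is the intended negative case: the target refuses to load a PSK file that is not owner-only, which is why the script runs chmod 0600 on the key before restarting the target and retrying.

    chmod 0600 /tmp/tmp.sPMzwTZbj9                     # PSK file must be 0600
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                  # -k: TLS-enabled listener
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sPMzwTZbj9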
00:20:54.268 [2024-07-15 20:56:58.038189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.839 20:56:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.839 20:56:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:54.839 20:56:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.839 20:56:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.839 20:56:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.839 20:56:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.839 20:56:58 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.sPMzwTZbj9 00:20:54.839 20:56:58 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sPMzwTZbj9 00:20:54.839 20:56:58 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:55.099 [2024-07-15 20:56:58.829744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.099 20:56:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:55.359 20:56:58 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:55.359 [2024-07-15 20:56:59.122449] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:55.359 [2024-07-15 20:56:59.122613] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.359 20:56:59 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:55.620 malloc0 00:20:55.620 20:56:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:55.620 20:56:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sPMzwTZbj9 00:20:55.880 [2024-07-15 20:56:59.549308] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:55.880 20:56:59 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1622402 00:20:55.880 20:56:59 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:55.880 20:56:59 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:55.880 20:56:59 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1622402 /var/tmp/bdevperf.sock 00:20:55.880 20:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1622402 ']' 00:20:55.880 20:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.880 20:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.880 20:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:55.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.880 20:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.880 20:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.880 [2024-07-15 20:56:59.593404] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:20:55.880 [2024-07-15 20:56:59.593455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622402 ] 00:20:55.880 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.880 [2024-07-15 20:56:59.645289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.880 [2024-07-15 20:56:59.697860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.140 20:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.140 20:56:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:56.140 20:56:59 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sPMzwTZbj9 00:20:56.140 [2024-07-15 20:56:59.913532] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:56.140 [2024-07-15 20:56:59.913602] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:56.140 TLSTESTn1 00:20:56.140 20:57:00 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:56.401 20:57:00 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:56.401 "subsystems": [ 00:20:56.401 { 00:20:56.401 "subsystem": "keyring", 00:20:56.401 "config": [] 00:20:56.401 }, 00:20:56.401 { 00:20:56.401 "subsystem": "iobuf", 00:20:56.401 "config": [ 00:20:56.401 { 00:20:56.401 "method": "iobuf_set_options", 00:20:56.401 "params": { 00:20:56.401 "small_pool_count": 8192, 00:20:56.401 "large_pool_count": 1024, 00:20:56.401 "small_bufsize": 8192, 00:20:56.401 "large_bufsize": 135168 00:20:56.401 } 00:20:56.401 } 00:20:56.401 ] 00:20:56.401 }, 00:20:56.401 { 00:20:56.401 "subsystem": "sock", 00:20:56.401 "config": [ 00:20:56.401 { 00:20:56.401 "method": "sock_set_default_impl", 00:20:56.401 "params": { 00:20:56.401 "impl_name": "posix" 00:20:56.401 } 00:20:56.401 }, 00:20:56.401 { 00:20:56.401 "method": "sock_impl_set_options", 00:20:56.401 "params": { 00:20:56.401 "impl_name": "ssl", 00:20:56.401 "recv_buf_size": 4096, 00:20:56.401 "send_buf_size": 4096, 00:20:56.401 "enable_recv_pipe": true, 00:20:56.401 "enable_quickack": false, 00:20:56.401 "enable_placement_id": 0, 00:20:56.401 "enable_zerocopy_send_server": true, 00:20:56.401 "enable_zerocopy_send_client": false, 00:20:56.401 "zerocopy_threshold": 0, 00:20:56.401 "tls_version": 0, 00:20:56.401 "enable_ktls": false 00:20:56.401 } 00:20:56.401 }, 00:20:56.401 { 00:20:56.401 "method": "sock_impl_set_options", 00:20:56.401 "params": { 00:20:56.401 "impl_name": "posix", 00:20:56.401 "recv_buf_size": 2097152, 00:20:56.401 
"send_buf_size": 2097152, 00:20:56.401 "enable_recv_pipe": true, 00:20:56.401 "enable_quickack": false, 00:20:56.401 "enable_placement_id": 0, 00:20:56.401 "enable_zerocopy_send_server": true, 00:20:56.401 "enable_zerocopy_send_client": false, 00:20:56.401 "zerocopy_threshold": 0, 00:20:56.401 "tls_version": 0, 00:20:56.401 "enable_ktls": false 00:20:56.401 } 00:20:56.401 } 00:20:56.401 ] 00:20:56.401 }, 00:20:56.401 { 00:20:56.401 "subsystem": "vmd", 00:20:56.401 "config": [] 00:20:56.401 }, 00:20:56.401 { 00:20:56.401 "subsystem": "accel", 00:20:56.401 "config": [ 00:20:56.401 { 00:20:56.401 "method": "accel_set_options", 00:20:56.401 "params": { 00:20:56.401 "small_cache_size": 128, 00:20:56.401 "large_cache_size": 16, 00:20:56.401 "task_count": 2048, 00:20:56.401 "sequence_count": 2048, 00:20:56.401 "buf_count": 2048 00:20:56.401 } 00:20:56.401 } 00:20:56.401 ] 00:20:56.401 }, 00:20:56.401 { 00:20:56.401 "subsystem": "bdev", 00:20:56.401 "config": [ 00:20:56.401 { 00:20:56.401 "method": "bdev_set_options", 00:20:56.401 "params": { 00:20:56.401 "bdev_io_pool_size": 65535, 00:20:56.401 "bdev_io_cache_size": 256, 00:20:56.401 "bdev_auto_examine": true, 00:20:56.401 "iobuf_small_cache_size": 128, 00:20:56.401 "iobuf_large_cache_size": 16 00:20:56.401 } 00:20:56.401 }, 00:20:56.401 { 00:20:56.401 "method": "bdev_raid_set_options", 00:20:56.401 "params": { 00:20:56.401 "process_window_size_kb": 1024 00:20:56.401 } 00:20:56.401 }, 00:20:56.401 { 00:20:56.401 "method": "bdev_iscsi_set_options", 00:20:56.401 "params": { 00:20:56.401 "timeout_sec": 30 00:20:56.401 } 00:20:56.401 }, 00:20:56.401 { 00:20:56.401 "method": "bdev_nvme_set_options", 00:20:56.401 "params": { 00:20:56.401 "action_on_timeout": "none", 00:20:56.401 "timeout_us": 0, 00:20:56.401 "timeout_admin_us": 0, 00:20:56.401 "keep_alive_timeout_ms": 10000, 00:20:56.401 "arbitration_burst": 0, 00:20:56.401 "low_priority_weight": 0, 00:20:56.401 "medium_priority_weight": 0, 00:20:56.401 "high_priority_weight": 0, 00:20:56.401 "nvme_adminq_poll_period_us": 10000, 00:20:56.401 "nvme_ioq_poll_period_us": 0, 00:20:56.401 "io_queue_requests": 0, 00:20:56.401 "delay_cmd_submit": true, 00:20:56.401 "transport_retry_count": 4, 00:20:56.401 "bdev_retry_count": 3, 00:20:56.401 "transport_ack_timeout": 0, 00:20:56.401 "ctrlr_loss_timeout_sec": 0, 00:20:56.401 "reconnect_delay_sec": 0, 00:20:56.401 "fast_io_fail_timeout_sec": 0, 00:20:56.401 "disable_auto_failback": false, 00:20:56.401 "generate_uuids": false, 00:20:56.401 "transport_tos": 0, 00:20:56.401 "nvme_error_stat": false, 00:20:56.401 "rdma_srq_size": 0, 00:20:56.401 "io_path_stat": false, 00:20:56.402 "allow_accel_sequence": false, 00:20:56.402 "rdma_max_cq_size": 0, 00:20:56.402 "rdma_cm_event_timeout_ms": 0, 00:20:56.402 "dhchap_digests": [ 00:20:56.402 "sha256", 00:20:56.402 "sha384", 00:20:56.402 "sha512" 00:20:56.402 ], 00:20:56.402 "dhchap_dhgroups": [ 00:20:56.402 "null", 00:20:56.402 "ffdhe2048", 00:20:56.402 "ffdhe3072", 00:20:56.402 "ffdhe4096", 00:20:56.402 "ffdhe6144", 00:20:56.402 "ffdhe8192" 00:20:56.402 ] 00:20:56.402 } 00:20:56.402 }, 00:20:56.402 { 00:20:56.402 "method": "bdev_nvme_set_hotplug", 00:20:56.402 "params": { 00:20:56.402 "period_us": 100000, 00:20:56.402 "enable": false 00:20:56.402 } 00:20:56.402 }, 00:20:56.402 { 00:20:56.402 "method": "bdev_malloc_create", 00:20:56.402 "params": { 00:20:56.402 "name": "malloc0", 00:20:56.402 "num_blocks": 8192, 00:20:56.402 "block_size": 4096, 00:20:56.402 "physical_block_size": 4096, 00:20:56.402 "uuid": 
"053f9676-29f6-4a2c-bde8-0635baab294e", 00:20:56.402 "optimal_io_boundary": 0 00:20:56.402 } 00:20:56.402 }, 00:20:56.402 { 00:20:56.402 "method": "bdev_wait_for_examine" 00:20:56.402 } 00:20:56.402 ] 00:20:56.402 }, 00:20:56.402 { 00:20:56.402 "subsystem": "nbd", 00:20:56.402 "config": [] 00:20:56.402 }, 00:20:56.402 { 00:20:56.402 "subsystem": "scheduler", 00:20:56.402 "config": [ 00:20:56.402 { 00:20:56.402 "method": "framework_set_scheduler", 00:20:56.402 "params": { 00:20:56.402 "name": "static" 00:20:56.402 } 00:20:56.402 } 00:20:56.402 ] 00:20:56.402 }, 00:20:56.402 { 00:20:56.402 "subsystem": "nvmf", 00:20:56.402 "config": [ 00:20:56.402 { 00:20:56.402 "method": "nvmf_set_config", 00:20:56.402 "params": { 00:20:56.402 "discovery_filter": "match_any", 00:20:56.402 "admin_cmd_passthru": { 00:20:56.402 "identify_ctrlr": false 00:20:56.402 } 00:20:56.402 } 00:20:56.402 }, 00:20:56.402 { 00:20:56.402 "method": "nvmf_set_max_subsystems", 00:20:56.402 "params": { 00:20:56.402 "max_subsystems": 1024 00:20:56.402 } 00:20:56.402 }, 00:20:56.402 { 00:20:56.402 "method": "nvmf_set_crdt", 00:20:56.402 "params": { 00:20:56.402 "crdt1": 0, 00:20:56.402 "crdt2": 0, 00:20:56.402 "crdt3": 0 00:20:56.402 } 00:20:56.402 }, 00:20:56.402 { 00:20:56.402 "method": "nvmf_create_transport", 00:20:56.402 "params": { 00:20:56.402 "trtype": "TCP", 00:20:56.402 "max_queue_depth": 128, 00:20:56.402 "max_io_qpairs_per_ctrlr": 127, 00:20:56.402 "in_capsule_data_size": 4096, 00:20:56.402 "max_io_size": 131072, 00:20:56.402 "io_unit_size": 131072, 00:20:56.402 "max_aq_depth": 128, 00:20:56.402 "num_shared_buffers": 511, 00:20:56.402 "buf_cache_size": 4294967295, 00:20:56.402 "dif_insert_or_strip": false, 00:20:56.402 "zcopy": false, 00:20:56.402 "c2h_success": false, 00:20:56.402 "sock_priority": 0, 00:20:56.402 "abort_timeout_sec": 1, 00:20:56.402 "ack_timeout": 0, 00:20:56.402 "data_wr_pool_size": 0 00:20:56.402 } 00:20:56.402 }, 00:20:56.402 { 00:20:56.402 "method": "nvmf_create_subsystem", 00:20:56.402 "params": { 00:20:56.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.402 "allow_any_host": false, 00:20:56.402 "serial_number": "SPDK00000000000001", 00:20:56.402 "model_number": "SPDK bdev Controller", 00:20:56.402 "max_namespaces": 10, 00:20:56.402 "min_cntlid": 1, 00:20:56.402 "max_cntlid": 65519, 00:20:56.402 "ana_reporting": false 00:20:56.402 } 00:20:56.402 }, 00:20:56.402 { 00:20:56.402 "method": "nvmf_subsystem_add_host", 00:20:56.402 "params": { 00:20:56.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.402 "host": "nqn.2016-06.io.spdk:host1", 00:20:56.402 "psk": "/tmp/tmp.sPMzwTZbj9" 00:20:56.402 } 00:20:56.402 }, 00:20:56.402 { 00:20:56.402 "method": "nvmf_subsystem_add_ns", 00:20:56.402 "params": { 00:20:56.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.402 "namespace": { 00:20:56.402 "nsid": 1, 00:20:56.402 "bdev_name": "malloc0", 00:20:56.402 "nguid": "053F967629F64A2CBDE80635BAAB294E", 00:20:56.402 "uuid": "053f9676-29f6-4a2c-bde8-0635baab294e", 00:20:56.402 "no_auto_visible": false 00:20:56.402 } 00:20:56.402 } 00:20:56.402 }, 00:20:56.402 { 00:20:56.402 "method": "nvmf_subsystem_add_listener", 00:20:56.402 "params": { 00:20:56.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.402 "listen_address": { 00:20:56.402 "trtype": "TCP", 00:20:56.402 "adrfam": "IPv4", 00:20:56.402 "traddr": "10.0.0.2", 00:20:56.402 "trsvcid": "4420" 00:20:56.402 }, 00:20:56.402 "secure_channel": true 00:20:56.402 } 00:20:56.402 } 00:20:56.402 ] 00:20:56.402 } 00:20:56.402 ] 00:20:56.402 }' 00:20:56.402 20:57:00 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:56.664 20:57:00 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:56.664 "subsystems": [ 00:20:56.664 { 00:20:56.664 "subsystem": "keyring", 00:20:56.664 "config": [] 00:20:56.664 }, 00:20:56.664 { 00:20:56.664 "subsystem": "iobuf", 00:20:56.664 "config": [ 00:20:56.664 { 00:20:56.664 "method": "iobuf_set_options", 00:20:56.664 "params": { 00:20:56.664 "small_pool_count": 8192, 00:20:56.664 "large_pool_count": 1024, 00:20:56.664 "small_bufsize": 8192, 00:20:56.664 "large_bufsize": 135168 00:20:56.664 } 00:20:56.664 } 00:20:56.664 ] 00:20:56.664 }, 00:20:56.664 { 00:20:56.664 "subsystem": "sock", 00:20:56.664 "config": [ 00:20:56.664 { 00:20:56.664 "method": "sock_set_default_impl", 00:20:56.664 "params": { 00:20:56.664 "impl_name": "posix" 00:20:56.664 } 00:20:56.664 }, 00:20:56.664 { 00:20:56.664 "method": "sock_impl_set_options", 00:20:56.664 "params": { 00:20:56.664 "impl_name": "ssl", 00:20:56.664 "recv_buf_size": 4096, 00:20:56.664 "send_buf_size": 4096, 00:20:56.664 "enable_recv_pipe": true, 00:20:56.664 "enable_quickack": false, 00:20:56.664 "enable_placement_id": 0, 00:20:56.664 "enable_zerocopy_send_server": true, 00:20:56.664 "enable_zerocopy_send_client": false, 00:20:56.664 "zerocopy_threshold": 0, 00:20:56.664 "tls_version": 0, 00:20:56.664 "enable_ktls": false 00:20:56.664 } 00:20:56.664 }, 00:20:56.664 { 00:20:56.664 "method": "sock_impl_set_options", 00:20:56.664 "params": { 00:20:56.664 "impl_name": "posix", 00:20:56.664 "recv_buf_size": 2097152, 00:20:56.664 "send_buf_size": 2097152, 00:20:56.664 "enable_recv_pipe": true, 00:20:56.664 "enable_quickack": false, 00:20:56.664 "enable_placement_id": 0, 00:20:56.664 "enable_zerocopy_send_server": true, 00:20:56.664 "enable_zerocopy_send_client": false, 00:20:56.664 "zerocopy_threshold": 0, 00:20:56.664 "tls_version": 0, 00:20:56.664 "enable_ktls": false 00:20:56.664 } 00:20:56.664 } 00:20:56.664 ] 00:20:56.664 }, 00:20:56.664 { 00:20:56.664 "subsystem": "vmd", 00:20:56.664 "config": [] 00:20:56.664 }, 00:20:56.664 { 00:20:56.664 "subsystem": "accel", 00:20:56.664 "config": [ 00:20:56.664 { 00:20:56.664 "method": "accel_set_options", 00:20:56.664 "params": { 00:20:56.664 "small_cache_size": 128, 00:20:56.664 "large_cache_size": 16, 00:20:56.664 "task_count": 2048, 00:20:56.664 "sequence_count": 2048, 00:20:56.664 "buf_count": 2048 00:20:56.664 } 00:20:56.664 } 00:20:56.664 ] 00:20:56.664 }, 00:20:56.664 { 00:20:56.664 "subsystem": "bdev", 00:20:56.664 "config": [ 00:20:56.664 { 00:20:56.664 "method": "bdev_set_options", 00:20:56.664 "params": { 00:20:56.664 "bdev_io_pool_size": 65535, 00:20:56.664 "bdev_io_cache_size": 256, 00:20:56.664 "bdev_auto_examine": true, 00:20:56.664 "iobuf_small_cache_size": 128, 00:20:56.664 "iobuf_large_cache_size": 16 00:20:56.664 } 00:20:56.664 }, 00:20:56.664 { 00:20:56.664 "method": "bdev_raid_set_options", 00:20:56.664 "params": { 00:20:56.664 "process_window_size_kb": 1024 00:20:56.664 } 00:20:56.664 }, 00:20:56.664 { 00:20:56.664 "method": "bdev_iscsi_set_options", 00:20:56.664 "params": { 00:20:56.664 "timeout_sec": 30 00:20:56.664 } 00:20:56.664 }, 00:20:56.664 { 00:20:56.664 "method": "bdev_nvme_set_options", 00:20:56.664 "params": { 00:20:56.664 "action_on_timeout": "none", 00:20:56.664 "timeout_us": 0, 00:20:56.664 "timeout_admin_us": 0, 00:20:56.664 "keep_alive_timeout_ms": 10000, 00:20:56.664 "arbitration_burst": 0, 
00:20:56.664 "low_priority_weight": 0, 00:20:56.664 "medium_priority_weight": 0, 00:20:56.664 "high_priority_weight": 0, 00:20:56.664 "nvme_adminq_poll_period_us": 10000, 00:20:56.664 "nvme_ioq_poll_period_us": 0, 00:20:56.664 "io_queue_requests": 512, 00:20:56.664 "delay_cmd_submit": true, 00:20:56.664 "transport_retry_count": 4, 00:20:56.664 "bdev_retry_count": 3, 00:20:56.664 "transport_ack_timeout": 0, 00:20:56.664 "ctrlr_loss_timeout_sec": 0, 00:20:56.664 "reconnect_delay_sec": 0, 00:20:56.664 "fast_io_fail_timeout_sec": 0, 00:20:56.664 "disable_auto_failback": false, 00:20:56.664 "generate_uuids": false, 00:20:56.664 "transport_tos": 0, 00:20:56.664 "nvme_error_stat": false, 00:20:56.664 "rdma_srq_size": 0, 00:20:56.664 "io_path_stat": false, 00:20:56.664 "allow_accel_sequence": false, 00:20:56.664 "rdma_max_cq_size": 0, 00:20:56.664 "rdma_cm_event_timeout_ms": 0, 00:20:56.664 "dhchap_digests": [ 00:20:56.664 "sha256", 00:20:56.664 "sha384", 00:20:56.664 "sha512" 00:20:56.665 ], 00:20:56.665 "dhchap_dhgroups": [ 00:20:56.665 "null", 00:20:56.665 "ffdhe2048", 00:20:56.665 "ffdhe3072", 00:20:56.665 "ffdhe4096", 00:20:56.665 "ffdhe6144", 00:20:56.665 "ffdhe8192" 00:20:56.665 ] 00:20:56.665 } 00:20:56.665 }, 00:20:56.665 { 00:20:56.665 "method": "bdev_nvme_attach_controller", 00:20:56.665 "params": { 00:20:56.665 "name": "TLSTEST", 00:20:56.665 "trtype": "TCP", 00:20:56.665 "adrfam": "IPv4", 00:20:56.665 "traddr": "10.0.0.2", 00:20:56.665 "trsvcid": "4420", 00:20:56.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.665 "prchk_reftag": false, 00:20:56.665 "prchk_guard": false, 00:20:56.665 "ctrlr_loss_timeout_sec": 0, 00:20:56.665 "reconnect_delay_sec": 0, 00:20:56.665 "fast_io_fail_timeout_sec": 0, 00:20:56.665 "psk": "/tmp/tmp.sPMzwTZbj9", 00:20:56.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.665 "hdgst": false, 00:20:56.665 "ddgst": false 00:20:56.665 } 00:20:56.665 }, 00:20:56.665 { 00:20:56.665 "method": "bdev_nvme_set_hotplug", 00:20:56.665 "params": { 00:20:56.665 "period_us": 100000, 00:20:56.665 "enable": false 00:20:56.665 } 00:20:56.665 }, 00:20:56.665 { 00:20:56.665 "method": "bdev_wait_for_examine" 00:20:56.665 } 00:20:56.665 ] 00:20:56.665 }, 00:20:56.665 { 00:20:56.665 "subsystem": "nbd", 00:20:56.665 "config": [] 00:20:56.665 } 00:20:56.665 ] 00:20:56.665 }' 00:20:56.665 20:57:00 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1622402 00:20:56.665 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1622402 ']' 00:20:56.665 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1622402 00:20:56.665 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:56.665 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.665 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1622402 00:20:56.665 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:56.665 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:56.665 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1622402' 00:20:56.665 killing process with pid 1622402 00:20:56.665 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1622402 00:20:56.665 Received shutdown signal, test time was about 10.000000 seconds 00:20:56.665 00:20:56.665 Latency(us) 00:20:56.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:20:56.665 =================================================================================================================== 00:20:56.665 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:56.665 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1622402 00:20:56.665 [2024-07-15 20:57:00.549883] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:56.927 20:57:00 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1622055 00:20:56.927 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1622055 ']' 00:20:56.927 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1622055 00:20:56.927 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:56.927 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.927 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1622055 00:20:56.927 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:56.927 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:56.927 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1622055' 00:20:56.927 killing process with pid 1622055 00:20:56.927 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1622055 00:20:56.927 [2024-07-15 20:57:00.712946] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:56.927 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1622055 00:20:57.188 20:57:00 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:57.188 20:57:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:57.188 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:57.188 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.188 20:57:00 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:57.188 "subsystems": [ 00:20:57.188 { 00:20:57.188 "subsystem": "keyring", 00:20:57.189 "config": [] 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "subsystem": "iobuf", 00:20:57.189 "config": [ 00:20:57.189 { 00:20:57.189 "method": "iobuf_set_options", 00:20:57.189 "params": { 00:20:57.189 "small_pool_count": 8192, 00:20:57.189 "large_pool_count": 1024, 00:20:57.189 "small_bufsize": 8192, 00:20:57.189 "large_bufsize": 135168 00:20:57.189 } 00:20:57.189 } 00:20:57.189 ] 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "subsystem": "sock", 00:20:57.189 "config": [ 00:20:57.189 { 00:20:57.189 "method": "sock_set_default_impl", 00:20:57.189 "params": { 00:20:57.189 "impl_name": "posix" 00:20:57.189 } 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "method": "sock_impl_set_options", 00:20:57.189 "params": { 00:20:57.189 "impl_name": "ssl", 00:20:57.189 "recv_buf_size": 4096, 00:20:57.189 "send_buf_size": 4096, 00:20:57.189 "enable_recv_pipe": true, 00:20:57.189 "enable_quickack": false, 00:20:57.189 "enable_placement_id": 0, 00:20:57.189 "enable_zerocopy_send_server": true, 00:20:57.189 "enable_zerocopy_send_client": false, 00:20:57.189 "zerocopy_threshold": 0, 00:20:57.189 "tls_version": 0, 00:20:57.189 "enable_ktls": false 00:20:57.189 } 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "method": "sock_impl_set_options", 
00:20:57.189 "params": { 00:20:57.189 "impl_name": "posix", 00:20:57.189 "recv_buf_size": 2097152, 00:20:57.189 "send_buf_size": 2097152, 00:20:57.189 "enable_recv_pipe": true, 00:20:57.189 "enable_quickack": false, 00:20:57.189 "enable_placement_id": 0, 00:20:57.189 "enable_zerocopy_send_server": true, 00:20:57.189 "enable_zerocopy_send_client": false, 00:20:57.189 "zerocopy_threshold": 0, 00:20:57.189 "tls_version": 0, 00:20:57.189 "enable_ktls": false 00:20:57.189 } 00:20:57.189 } 00:20:57.189 ] 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "subsystem": "vmd", 00:20:57.189 "config": [] 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "subsystem": "accel", 00:20:57.189 "config": [ 00:20:57.189 { 00:20:57.189 "method": "accel_set_options", 00:20:57.189 "params": { 00:20:57.189 "small_cache_size": 128, 00:20:57.189 "large_cache_size": 16, 00:20:57.189 "task_count": 2048, 00:20:57.189 "sequence_count": 2048, 00:20:57.189 "buf_count": 2048 00:20:57.189 } 00:20:57.189 } 00:20:57.189 ] 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "subsystem": "bdev", 00:20:57.189 "config": [ 00:20:57.189 { 00:20:57.189 "method": "bdev_set_options", 00:20:57.189 "params": { 00:20:57.189 "bdev_io_pool_size": 65535, 00:20:57.189 "bdev_io_cache_size": 256, 00:20:57.189 "bdev_auto_examine": true, 00:20:57.189 "iobuf_small_cache_size": 128, 00:20:57.189 "iobuf_large_cache_size": 16 00:20:57.189 } 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "method": "bdev_raid_set_options", 00:20:57.189 "params": { 00:20:57.189 "process_window_size_kb": 1024 00:20:57.189 } 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "method": "bdev_iscsi_set_options", 00:20:57.189 "params": { 00:20:57.189 "timeout_sec": 30 00:20:57.189 } 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "method": "bdev_nvme_set_options", 00:20:57.189 "params": { 00:20:57.189 "action_on_timeout": "none", 00:20:57.189 "timeout_us": 0, 00:20:57.189 "timeout_admin_us": 0, 00:20:57.189 "keep_alive_timeout_ms": 10000, 00:20:57.189 "arbitration_burst": 0, 00:20:57.189 "low_priority_weight": 0, 00:20:57.189 "medium_priority_weight": 0, 00:20:57.189 "high_priority_weight": 0, 00:20:57.189 "nvme_adminq_poll_period_us": 10000, 00:20:57.189 "nvme_ioq_poll_period_us": 0, 00:20:57.189 "io_queue_requests": 0, 00:20:57.189 "delay_cmd_submit": true, 00:20:57.189 "transport_retry_count": 4, 00:20:57.189 "bdev_retry_count": 3, 00:20:57.189 "transport_ack_timeout": 0, 00:20:57.189 "ctrlr_loss_timeout_sec": 0, 00:20:57.189 "reconnect_delay_sec": 0, 00:20:57.189 "fast_io_fail_timeout_sec": 0, 00:20:57.189 "disable_auto_failback": false, 00:20:57.189 "generate_uuids": false, 00:20:57.189 "transport_tos": 0, 00:20:57.189 "nvme_error_stat": false, 00:20:57.189 "rdma_srq_size": 0, 00:20:57.189 "io_path_stat": false, 00:20:57.189 "allow_accel_sequence": false, 00:20:57.189 "rdma_max_cq_size": 0, 00:20:57.189 "rdma_cm_event_timeout_ms": 0, 00:20:57.189 "dhchap_digests": [ 00:20:57.189 "sha256", 00:20:57.189 "sha384", 00:20:57.189 "sha512" 00:20:57.189 ], 00:20:57.189 "dhchap_dhgroups": [ 00:20:57.189 "null", 00:20:57.189 "ffdhe2048", 00:20:57.189 "ffdhe3072", 00:20:57.189 "ffdhe4096", 00:20:57.189 "ffdhe6144", 00:20:57.189 "ffdhe8192" 00:20:57.189 ] 00:20:57.189 } 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "method": "bdev_nvme_set_hotplug", 00:20:57.189 "params": { 00:20:57.189 "period_us": 100000, 00:20:57.189 "enable": false 00:20:57.189 } 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "method": "bdev_malloc_create", 00:20:57.189 "params": { 00:20:57.189 "name": "malloc0", 00:20:57.189 "num_blocks": 8192, 
00:20:57.189 "block_size": 4096, 00:20:57.189 "physical_block_size": 4096, 00:20:57.189 "uuid": "053f9676-29f6-4a2c-bde8-0635baab294e", 00:20:57.189 "optimal_io_boundary": 0 00:20:57.189 } 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "method": "bdev_wait_for_examine" 00:20:57.189 } 00:20:57.189 ] 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "subsystem": "nbd", 00:20:57.189 "config": [] 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "subsystem": "scheduler", 00:20:57.189 "config": [ 00:20:57.189 { 00:20:57.189 "method": "framework_set_scheduler", 00:20:57.189 "params": { 00:20:57.189 "name": "static" 00:20:57.189 } 00:20:57.189 } 00:20:57.189 ] 00:20:57.189 }, 00:20:57.189 { 00:20:57.189 "subsystem": "nvmf", 00:20:57.189 "config": [ 00:20:57.189 { 00:20:57.189 "method": "nvmf_set_config", 00:20:57.189 "params": { 00:20:57.189 "discovery_filter": "match_any", 00:20:57.189 "admin_cmd_passthru": { 00:20:57.189 "identify_ctrlr": false 00:20:57.189 } 00:20:57.190 } 00:20:57.190 }, 00:20:57.190 { 00:20:57.190 "method": "nvmf_set_max_subsystems", 00:20:57.190 "params": { 00:20:57.190 "max_subsystems": 1024 00:20:57.190 } 00:20:57.190 }, 00:20:57.190 { 00:20:57.190 "method": "nvmf_set_crdt", 00:20:57.190 "params": { 00:20:57.190 "crdt1": 0, 00:20:57.190 "crdt2": 0, 00:20:57.190 "crdt3": 0 00:20:57.190 } 00:20:57.190 }, 00:20:57.190 { 00:20:57.190 "method": "nvmf_create_transport", 00:20:57.190 "params": { 00:20:57.190 "trtype": "TCP", 00:20:57.190 "max_queue_depth": 128, 00:20:57.190 "max_io_qpairs_per_ctrlr": 127, 00:20:57.190 "in_capsule_data_size": 4096, 00:20:57.190 "max_io_size": 131072, 00:20:57.190 "io_unit_size": 131072, 00:20:57.190 "max_aq_depth": 128, 00:20:57.190 "num_shared_buffers": 511, 00:20:57.190 "buf_cache_size": 4294967295, 00:20:57.190 "dif_insert_or_strip": false, 00:20:57.190 "zcopy": false, 00:20:57.190 "c2h_success": false, 00:20:57.190 "sock_priority": 0, 00:20:57.190 "abort_timeout_sec": 1, 00:20:57.190 "ack_timeout": 0, 00:20:57.190 "data_wr_pool_size": 0 00:20:57.190 } 00:20:57.190 }, 00:20:57.190 { 00:20:57.190 "method": "nvmf_create_subsystem", 00:20:57.190 "params": { 00:20:57.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.190 "allow_any_host": false, 00:20:57.190 "serial_number": "SPDK00000000000001", 00:20:57.190 "model_number": "SPDK bdev Controller", 00:20:57.190 "max_namespaces": 10, 00:20:57.190 "min_cntlid": 1, 00:20:57.190 "max_cntlid": 65519, 00:20:57.190 "ana_reporting": false 00:20:57.190 } 00:20:57.190 }, 00:20:57.190 { 00:20:57.190 "method": "nvmf_subsystem_add_host", 00:20:57.190 "params": { 00:20:57.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.190 "host": "nqn.2016-06.io.spdk:host1", 00:20:57.190 "psk": "/tmp/tmp.sPMzwTZbj9" 00:20:57.190 } 00:20:57.190 }, 00:20:57.190 { 00:20:57.190 "method": "nvmf_subsystem_add_ns", 00:20:57.190 "params": { 00:20:57.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.190 "namespace": { 00:20:57.190 "nsid": 1, 00:20:57.190 "bdev_name": "malloc0", 00:20:57.190 "nguid": "053F967629F64A2CBDE80635BAAB294E", 00:20:57.190 "uuid": "053f9676-29f6-4a2c-bde8-0635baab294e", 00:20:57.190 "no_auto_visible": false 00:20:57.190 } 00:20:57.190 } 00:20:57.190 }, 00:20:57.190 { 00:20:57.190 "method": "nvmf_subsystem_add_listener", 00:20:57.190 "params": { 00:20:57.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.190 "listen_address": { 00:20:57.190 "trtype": "TCP", 00:20:57.190 "adrfam": "IPv4", 00:20:57.190 "traddr": "10.0.0.2", 00:20:57.190 "trsvcid": "4420" 00:20:57.190 }, 00:20:57.190 "secure_channel": true 00:20:57.190 } 
00:20:57.190 } 00:20:57.190 ] 00:20:57.190 } 00:20:57.190 ] 00:20:57.190 }' 00:20:57.190 20:57:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1622611 00:20:57.190 20:57:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1622611 00:20:57.190 20:57:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:57.190 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1622611 ']' 00:20:57.190 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.190 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.190 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.190 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.190 20:57:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.190 [2024-07-15 20:57:00.890894] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:20:57.190 [2024-07-15 20:57:00.890949] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.190 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.190 [2024-07-15 20:57:00.973036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.190 [2024-07-15 20:57:01.027141] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.190 [2024-07-15 20:57:01.027171] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.190 [2024-07-15 20:57:01.027177] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.190 [2024-07-15 20:57:01.027181] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.190 [2024-07-15 20:57:01.027185] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
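The two JSON blobs captured above (tgtconf from save_config on the target socket, bdevperfconf from save_config on /var/tmp/bdevperf.sock) are replayed rather than rebuilt: the fresh target is started with -c /dev/fd/62 and the saved target configuration is written into that descriptor. The sketch below uses bash process substitution as a plausible stand-in for the /dev/fd/62 seen in the trace; the point is that the TLS listener and the PSK host entry come back purely from the saved JSON, without re-issuing the individual RPCs.

    # capture the live configuration of both applications
    tgtconf=$(./scripts/rpc.py save_config)
    bdevperfconf=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    # restart the target preloaded with the saved config; the process
    # substitution shows up in the trace as '-c /dev/fd/62'
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &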
00:20:57.190 [2024-07-15 20:57:01.027230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.451 [2024-07-15 20:57:01.210777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.451 [2024-07-15 20:57:01.226753] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:57.451 [2024-07-15 20:57:01.242804] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:57.451 [2024-07-15 20:57:01.257262] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1623001 00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1623001 /var/tmp/bdevperf.sock 00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1623001 ']' 00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
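The initiator side is handled the same way in the trace that follows: bdevperf is launched idle (-z) on its own RPC socket with the saved bdevperfconf fed in through /dev/fd/63, and the 10-second verify run is only kicked off afterwards by bdevperf.py. Condensed, and again assuming process substitution behind the /dev/fd/63 seen in the trace:

    # bdevperf idle on core 2 (-m 0x4), own RPC socket, preloaded config:
    # queue depth 128, 4096-byte verify workload, 10 s runtime once triggered
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
    # start the I/O once the TLSTEST controller from the saved config is attached
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests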
00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.023 20:57:01 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:58.023 "subsystems": [ 00:20:58.023 { 00:20:58.023 "subsystem": "keyring", 00:20:58.023 "config": [] 00:20:58.023 }, 00:20:58.023 { 00:20:58.023 "subsystem": "iobuf", 00:20:58.023 "config": [ 00:20:58.023 { 00:20:58.023 "method": "iobuf_set_options", 00:20:58.023 "params": { 00:20:58.023 "small_pool_count": 8192, 00:20:58.023 "large_pool_count": 1024, 00:20:58.023 "small_bufsize": 8192, 00:20:58.023 "large_bufsize": 135168 00:20:58.023 } 00:20:58.023 } 00:20:58.023 ] 00:20:58.023 }, 00:20:58.023 { 00:20:58.023 "subsystem": "sock", 00:20:58.023 "config": [ 00:20:58.023 { 00:20:58.023 "method": "sock_set_default_impl", 00:20:58.023 "params": { 00:20:58.023 "impl_name": "posix" 00:20:58.023 } 00:20:58.023 }, 00:20:58.023 { 00:20:58.023 "method": "sock_impl_set_options", 00:20:58.023 "params": { 00:20:58.023 "impl_name": "ssl", 00:20:58.023 "recv_buf_size": 4096, 00:20:58.023 "send_buf_size": 4096, 00:20:58.023 "enable_recv_pipe": true, 00:20:58.023 "enable_quickack": false, 00:20:58.023 "enable_placement_id": 0, 00:20:58.023 "enable_zerocopy_send_server": true, 00:20:58.023 "enable_zerocopy_send_client": false, 00:20:58.023 "zerocopy_threshold": 0, 00:20:58.023 "tls_version": 0, 00:20:58.023 "enable_ktls": false 00:20:58.023 } 00:20:58.023 }, 00:20:58.023 { 00:20:58.023 "method": "sock_impl_set_options", 00:20:58.023 "params": { 00:20:58.023 "impl_name": "posix", 00:20:58.023 "recv_buf_size": 2097152, 00:20:58.023 "send_buf_size": 2097152, 00:20:58.023 "enable_recv_pipe": true, 00:20:58.023 "enable_quickack": false, 00:20:58.023 "enable_placement_id": 0, 00:20:58.023 "enable_zerocopy_send_server": true, 00:20:58.023 "enable_zerocopy_send_client": false, 00:20:58.023 "zerocopy_threshold": 0, 00:20:58.023 "tls_version": 0, 00:20:58.023 "enable_ktls": false 00:20:58.023 } 00:20:58.023 } 00:20:58.023 ] 00:20:58.023 }, 00:20:58.023 { 00:20:58.023 "subsystem": "vmd", 00:20:58.023 "config": [] 00:20:58.023 }, 00:20:58.023 { 00:20:58.023 "subsystem": "accel", 00:20:58.023 "config": [ 00:20:58.023 { 00:20:58.023 "method": "accel_set_options", 00:20:58.023 "params": { 00:20:58.023 "small_cache_size": 128, 00:20:58.023 "large_cache_size": 16, 00:20:58.023 "task_count": 2048, 00:20:58.023 "sequence_count": 2048, 00:20:58.023 "buf_count": 2048 00:20:58.023 } 00:20:58.023 } 00:20:58.023 ] 00:20:58.023 }, 00:20:58.023 { 00:20:58.023 "subsystem": "bdev", 00:20:58.023 "config": [ 00:20:58.023 { 00:20:58.023 "method": "bdev_set_options", 00:20:58.023 "params": { 00:20:58.023 "bdev_io_pool_size": 65535, 00:20:58.023 "bdev_io_cache_size": 256, 00:20:58.023 "bdev_auto_examine": true, 00:20:58.023 "iobuf_small_cache_size": 128, 00:20:58.023 "iobuf_large_cache_size": 16 00:20:58.023 } 00:20:58.023 }, 00:20:58.023 { 00:20:58.023 "method": "bdev_raid_set_options", 00:20:58.023 "params": { 00:20:58.023 "process_window_size_kb": 1024 00:20:58.023 } 00:20:58.023 }, 00:20:58.023 { 00:20:58.023 "method": "bdev_iscsi_set_options", 00:20:58.023 "params": { 00:20:58.023 "timeout_sec": 30 00:20:58.023 } 00:20:58.023 }, 00:20:58.023 { 00:20:58.023 "method": 
"bdev_nvme_set_options", 00:20:58.023 "params": { 00:20:58.023 "action_on_timeout": "none", 00:20:58.023 "timeout_us": 0, 00:20:58.023 "timeout_admin_us": 0, 00:20:58.023 "keep_alive_timeout_ms": 10000, 00:20:58.023 "arbitration_burst": 0, 00:20:58.023 "low_priority_weight": 0, 00:20:58.023 "medium_priority_weight": 0, 00:20:58.023 "high_priority_weight": 0, 00:20:58.023 "nvme_adminq_poll_period_us": 10000, 00:20:58.023 "nvme_ioq_poll_period_us": 0, 00:20:58.023 "io_queue_requests": 512, 00:20:58.023 "delay_cmd_submit": true, 00:20:58.023 "transport_retry_count": 4, 00:20:58.023 "bdev_retry_count": 3, 00:20:58.023 "transport_ack_timeout": 0, 00:20:58.023 "ctrlr_loss_timeout_sec": 0, 00:20:58.023 "reconnect_delay_sec": 0, 00:20:58.023 "fast_io_fail_timeout_sec": 0, 00:20:58.023 "disable_auto_failback": false, 00:20:58.023 "generate_uuids": false, 00:20:58.023 "transport_tos": 0, 00:20:58.023 "nvme_error_stat": false, 00:20:58.023 "rdma_srq_size": 0, 00:20:58.023 "io_path_stat": false, 00:20:58.023 "allow_accel_sequence": false, 00:20:58.023 "rdma_max_cq_size": 0, 00:20:58.023 "rdma_cm_event_timeout_ms": 0, 00:20:58.023 "dhchap_digests": [ 00:20:58.023 "sha256", 00:20:58.023 "sha384", 00:20:58.023 "sha512" 00:20:58.023 ], 00:20:58.023 "dhchap_dhgroups": [ 00:20:58.023 "null", 00:20:58.023 "ffdhe2048", 00:20:58.023 "ffdhe3072", 00:20:58.023 "ffdhe4096", 00:20:58.023 "ffdhe6144", 00:20:58.023 "ffdhe8192" 00:20:58.023 ] 00:20:58.023 } 00:20:58.023 }, 00:20:58.023 { 00:20:58.023 "method": "bdev_nvme_attach_controller", 00:20:58.023 "params": { 00:20:58.023 "name": "TLSTEST", 00:20:58.023 "trtype": "TCP", 00:20:58.023 "adrfam": "IPv4", 00:20:58.023 "traddr": "10.0.0.2", 00:20:58.023 "trsvcid": "4420", 00:20:58.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.023 "prchk_reftag": false, 00:20:58.023 "prchk_guard": false, 00:20:58.023 "ctrlr_loss_timeout_sec": 0, 00:20:58.023 "reconnect_delay_sec": 0, 00:20:58.023 "fast_io_fail_timeout_sec": 0, 00:20:58.023 "psk": "/tmp/tmp.sPMzwTZbj9", 00:20:58.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.023 "hdgst": false, 00:20:58.023 "ddgst": false 00:20:58.023 } 00:20:58.023 }, 00:20:58.023 { 00:20:58.023 "method": "bdev_nvme_set_hotplug", 00:20:58.023 "params": { 00:20:58.023 "period_us": 100000, 00:20:58.023 "enable": false 00:20:58.023 } 00:20:58.023 }, 00:20:58.023 { 00:20:58.023 "method": "bdev_wait_for_examine" 00:20:58.023 } 00:20:58.023 ] 00:20:58.023 }, 00:20:58.023 { 00:20:58.023 "subsystem": "nbd", 00:20:58.023 "config": [] 00:20:58.023 } 00:20:58.023 ] 00:20:58.023 }' 00:20:58.023 [2024-07-15 20:57:01.741738] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:20:58.023 [2024-07-15 20:57:01.741793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1623001 ] 00:20:58.023 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.023 [2024-07-15 20:57:01.791992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.023 [2024-07-15 20:57:01.845323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.284 [2024-07-15 20:57:01.969932] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.284 [2024-07-15 20:57:01.969995] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:58.854 20:57:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.854 20:57:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:58.854 20:57:02 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:58.854 Running I/O for 10 seconds... 00:21:08.873 00:21:08.874 Latency(us) 00:21:08.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.874 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:08.874 Verification LBA range: start 0x0 length 0x2000 00:21:08.874 TLSTESTn1 : 10.05 2665.67 10.41 0.00 0.00 47888.16 5952.85 59419.31 00:21:08.874 =================================================================================================================== 00:21:08.874 Total : 2665.67 10.41 0.00 0.00 47888.16 5952.85 59419.31 00:21:08.874 0 00:21:08.874 20:57:12 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:08.874 20:57:12 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1623001 00:21:08.874 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1623001 ']' 00:21:08.874 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1623001 00:21:08.874 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:08.874 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:08.874 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1623001 00:21:08.874 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:08.874 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:08.874 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1623001' 00:21:08.874 killing process with pid 1623001 00:21:08.874 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1623001 00:21:08.874 Received shutdown signal, test time was about 10.000000 seconds 00:21:08.874 00:21:08.874 Latency(us) 00:21:08.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.874 =================================================================================================================== 00:21:08.874 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:08.874 [2024-07-15 20:57:12.745170] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:08.874 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1623001 00:21:09.135 20:57:12 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1622611 00:21:09.135 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1622611 ']' 00:21:09.135 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1622611 00:21:09.135 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:09.135 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:09.135 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1622611 00:21:09.135 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:09.135 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:09.135 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1622611' 00:21:09.135 killing process with pid 1622611 00:21:09.135 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1622611 00:21:09.135 [2024-07-15 20:57:12.914138] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:09.135 20:57:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1622611 00:21:09.397 20:57:13 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:09.397 20:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:09.397 20:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:09.397 20:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.397 20:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1625516 00:21:09.397 20:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:09.397 20:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1625516 00:21:09.397 20:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1625516 ']' 00:21:09.397 20:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.397 20:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.397 20:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.397 20:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.397 20:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.397 [2024-07-15 20:57:13.094684] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:21:09.397 [2024-07-15 20:57:13.094737] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.397 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.397 [2024-07-15 20:57:13.161961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.397 [2024-07-15 20:57:13.224963] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.397 [2024-07-15 20:57:13.225001] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.397 [2024-07-15 20:57:13.225008] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.397 [2024-07-15 20:57:13.225015] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.397 [2024-07-15 20:57:13.225020] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:09.397 [2024-07-15 20:57:13.225046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.657 20:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.658 20:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:09.658 20:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:09.658 20:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:09.658 20:57:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.658 20:57:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.658 20:57:13 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.sPMzwTZbj9 00:21:09.658 20:57:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sPMzwTZbj9 00:21:09.658 20:57:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:09.658 [2024-07-15 20:57:13.494457] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.658 20:57:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:09.918 20:57:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:10.178 [2024-07-15 20:57:13.835298] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:10.178 [2024-07-15 20:57:13.835489] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.178 20:57:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:10.178 malloc0 00:21:10.178 20:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:10.439 20:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.sPMzwTZbj9 00:21:10.700 [2024-07-15 20:57:14.335302] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:10.700 20:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:10.700 20:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1625850 00:21:10.700 20:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:10.700 20:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1625850 /var/tmp/bdevperf.sock 00:21:10.700 20:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1625850 ']' 00:21:10.700 20:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:10.700 20:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:10.700 20:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:10.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:10.700 20:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:10.700 20:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.700 [2024-07-15 20:57:14.395101] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:21:10.700 [2024-07-15 20:57:14.395146] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625850 ] 00:21:10.700 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.700 [2024-07-15 20:57:14.435958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.700 [2024-07-15 20:57:14.489382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.700 20:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:10.700 20:57:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:10.700 20:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sPMzwTZbj9 00:21:10.961 20:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:10.961 [2024-07-15 20:57:14.846007] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:11.223 nvme0n1 00:21:11.223 20:57:14 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:11.223 Running I/O for 1 seconds... 
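Condensed, the PSK/TLS sequence exercised above boils down to six target-side RPCs plus two initiator-side RPCs against the bdevperf socket. In this sketch the workspace paths are shortened and the key file name is whatever mktemp produced earlier in the run:

  rpc=scripts/rpc.py                                  # target RPCs on the default /var/tmp/spdk.sock
  brpc='scripts/rpc.py -s /var/tmp/bdevperf.sock'     # bdevperf RPCs
  key=/tmp/tmp.sPMzwTZbj9                             # PSK interchange file written earlier in the run

  # Target side: TCP transport, subsystem, TLS-enabled listener (-k), namespace, host with PSK
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

  # Initiator side (bdevperf): register the key in the keyring, then attach with --psk
  $brpc keyring_file_add_key key0 "$key"
  $brpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Note that the path form of --psk on nvmf_subsystem_add_host is the one the log flags as deprecated ("PSK path" scheduled for removal in v24.09), while the initiator side already uses the keyring-based form (keyring_file_add_key plus --psk key0).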
00:21:12.609 00:21:12.609 Latency(us) 00:21:12.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.609 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:12.609 Verification LBA range: start 0x0 length 0x2000 00:21:12.609 nvme0n1 : 1.05 1893.95 7.40 0.00 0.00 66173.27 6089.39 124955.31 00:21:12.609 =================================================================================================================== 00:21:12.609 Total : 1893.95 7.40 0.00 0.00 66173.27 6089.39 124955.31 00:21:12.609 0 00:21:12.609 20:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1625850 00:21:12.609 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1625850 ']' 00:21:12.609 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1625850 00:21:12.609 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:12.609 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:12.609 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1625850 00:21:12.609 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:12.609 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:12.609 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1625850' 00:21:12.609 killing process with pid 1625850 00:21:12.609 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1625850 00:21:12.609 Received shutdown signal, test time was about 1.000000 seconds 00:21:12.609 00:21:12.609 Latency(us) 00:21:12.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.609 =================================================================================================================== 00:21:12.609 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.609 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1625850 00:21:12.609 20:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1625516 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1625516 ']' 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1625516 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1625516 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1625516' 00:21:12.610 killing process with pid 1625516 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1625516 00:21:12.610 [2024-07-15 20:57:16.297439] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1625516 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:12.610 
20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1626210 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1626210 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1626210 ']' 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.610 20:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.610 [2024-07-15 20:57:16.495134] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:21:12.610 [2024-07-15 20:57:16.495187] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.871 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.871 [2024-07-15 20:57:16.560675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.871 [2024-07-15 20:57:16.623247] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.871 [2024-07-15 20:57:16.623285] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.871 [2024-07-15 20:57:16.623292] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.871 [2024-07-15 20:57:16.623298] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.871 [2024-07-15 20:57:16.623304] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
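The killprocess calls traced above all follow the same shape: confirm the pid still exists, look up its comm name, refuse to signal a sudo wrapper directly, then kill and reap it. A simplified sketch of that helper (the real implementation in autotest_common.sh also covers non-Linux via uname and treats the sudo case specially rather than just bailing out):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 0           # already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")          # e.g. reactor_0 / reactor_1
      if [ "$name" = sudo ]; then
          return 1                                     # sketch only: real helper handles sudo differently
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                  # works here because the target is a child of the shell
  }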
00:21:12.871 [2024-07-15 20:57:16.623324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.442 20:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:13.442 20:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:13.442 20:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:13.442 20:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:13.442 20:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.442 20:57:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.442 20:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:13.442 20:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.442 20:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.442 [2024-07-15 20:57:17.317917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.442 malloc0 00:21:13.703 [2024-07-15 20:57:17.344700] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:13.703 [2024-07-15 20:57:17.344892] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.703 20:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.703 20:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1626550 00:21:13.703 20:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1626550 /var/tmp/bdevperf.sock 00:21:13.703 20:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:13.703 20:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1626550 ']' 00:21:13.703 20:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.703 20:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.703 20:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.703 20:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.703 20:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.703 [2024-07-15 20:57:17.422599] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:21:13.703 [2024-07-15 20:57:17.422644] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1626550 ] 00:21:13.703 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.703 [2024-07-15 20:57:17.497123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.703 [2024-07-15 20:57:17.550918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.645 20:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.645 20:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:14.645 20:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sPMzwTZbj9 00:21:14.646 20:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:14.646 [2024-07-15 20:57:18.497035] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.907 nvme0n1 00:21:14.907 20:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:14.907 Running I/O for 1 seconds... 00:21:16.288 00:21:16.288 Latency(us) 00:21:16.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.288 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:16.288 Verification LBA range: start 0x0 length 0x2000 00:21:16.288 nvme0n1 : 1.07 2165.78 8.46 0.00 0.00 57414.15 6144.00 121460.05 00:21:16.288 =================================================================================================================== 00:21:16.288 Total : 2165.78 8.46 0.00 0.00 57414.15 6144.00 121460.05 00:21:16.288 0 00:21:16.288 20:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:16.288 20:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.288 20:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.288 20:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.288 20:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:16.288 "subsystems": [ 00:21:16.288 { 00:21:16.288 "subsystem": "keyring", 00:21:16.288 "config": [ 00:21:16.288 { 00:21:16.288 "method": "keyring_file_add_key", 00:21:16.288 "params": { 00:21:16.288 "name": "key0", 00:21:16.288 "path": "/tmp/tmp.sPMzwTZbj9" 00:21:16.288 } 00:21:16.288 } 00:21:16.288 ] 00:21:16.288 }, 00:21:16.288 { 00:21:16.288 "subsystem": "iobuf", 00:21:16.288 "config": [ 00:21:16.288 { 00:21:16.288 "method": "iobuf_set_options", 00:21:16.288 "params": { 00:21:16.288 "small_pool_count": 8192, 00:21:16.288 "large_pool_count": 1024, 00:21:16.288 "small_bufsize": 8192, 00:21:16.288 "large_bufsize": 135168 00:21:16.288 } 00:21:16.288 } 00:21:16.288 ] 00:21:16.288 }, 00:21:16.288 { 00:21:16.288 "subsystem": "sock", 00:21:16.288 "config": [ 00:21:16.288 { 00:21:16.288 "method": "sock_set_default_impl", 00:21:16.288 "params": { 00:21:16.288 "impl_name": "posix" 00:21:16.288 } 
00:21:16.288 }, 00:21:16.288 { 00:21:16.288 "method": "sock_impl_set_options", 00:21:16.288 "params": { 00:21:16.288 "impl_name": "ssl", 00:21:16.288 "recv_buf_size": 4096, 00:21:16.288 "send_buf_size": 4096, 00:21:16.288 "enable_recv_pipe": true, 00:21:16.288 "enable_quickack": false, 00:21:16.288 "enable_placement_id": 0, 00:21:16.288 "enable_zerocopy_send_server": true, 00:21:16.288 "enable_zerocopy_send_client": false, 00:21:16.288 "zerocopy_threshold": 0, 00:21:16.288 "tls_version": 0, 00:21:16.288 "enable_ktls": false 00:21:16.288 } 00:21:16.288 }, 00:21:16.288 { 00:21:16.288 "method": "sock_impl_set_options", 00:21:16.288 "params": { 00:21:16.288 "impl_name": "posix", 00:21:16.288 "recv_buf_size": 2097152, 00:21:16.288 "send_buf_size": 2097152, 00:21:16.288 "enable_recv_pipe": true, 00:21:16.288 "enable_quickack": false, 00:21:16.288 "enable_placement_id": 0, 00:21:16.288 "enable_zerocopy_send_server": true, 00:21:16.288 "enable_zerocopy_send_client": false, 00:21:16.288 "zerocopy_threshold": 0, 00:21:16.288 "tls_version": 0, 00:21:16.288 "enable_ktls": false 00:21:16.288 } 00:21:16.288 } 00:21:16.288 ] 00:21:16.288 }, 00:21:16.288 { 00:21:16.288 "subsystem": "vmd", 00:21:16.288 "config": [] 00:21:16.288 }, 00:21:16.288 { 00:21:16.288 "subsystem": "accel", 00:21:16.288 "config": [ 00:21:16.288 { 00:21:16.288 "method": "accel_set_options", 00:21:16.288 "params": { 00:21:16.288 "small_cache_size": 128, 00:21:16.288 "large_cache_size": 16, 00:21:16.288 "task_count": 2048, 00:21:16.288 "sequence_count": 2048, 00:21:16.288 "buf_count": 2048 00:21:16.288 } 00:21:16.288 } 00:21:16.288 ] 00:21:16.288 }, 00:21:16.288 { 00:21:16.288 "subsystem": "bdev", 00:21:16.288 "config": [ 00:21:16.288 { 00:21:16.289 "method": "bdev_set_options", 00:21:16.289 "params": { 00:21:16.289 "bdev_io_pool_size": 65535, 00:21:16.289 "bdev_io_cache_size": 256, 00:21:16.289 "bdev_auto_examine": true, 00:21:16.289 "iobuf_small_cache_size": 128, 00:21:16.289 "iobuf_large_cache_size": 16 00:21:16.289 } 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "method": "bdev_raid_set_options", 00:21:16.289 "params": { 00:21:16.289 "process_window_size_kb": 1024 00:21:16.289 } 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "method": "bdev_iscsi_set_options", 00:21:16.289 "params": { 00:21:16.289 "timeout_sec": 30 00:21:16.289 } 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "method": "bdev_nvme_set_options", 00:21:16.289 "params": { 00:21:16.289 "action_on_timeout": "none", 00:21:16.289 "timeout_us": 0, 00:21:16.289 "timeout_admin_us": 0, 00:21:16.289 "keep_alive_timeout_ms": 10000, 00:21:16.289 "arbitration_burst": 0, 00:21:16.289 "low_priority_weight": 0, 00:21:16.289 "medium_priority_weight": 0, 00:21:16.289 "high_priority_weight": 0, 00:21:16.289 "nvme_adminq_poll_period_us": 10000, 00:21:16.289 "nvme_ioq_poll_period_us": 0, 00:21:16.289 "io_queue_requests": 0, 00:21:16.289 "delay_cmd_submit": true, 00:21:16.289 "transport_retry_count": 4, 00:21:16.289 "bdev_retry_count": 3, 00:21:16.289 "transport_ack_timeout": 0, 00:21:16.289 "ctrlr_loss_timeout_sec": 0, 00:21:16.289 "reconnect_delay_sec": 0, 00:21:16.289 "fast_io_fail_timeout_sec": 0, 00:21:16.289 "disable_auto_failback": false, 00:21:16.289 "generate_uuids": false, 00:21:16.289 "transport_tos": 0, 00:21:16.289 "nvme_error_stat": false, 00:21:16.289 "rdma_srq_size": 0, 00:21:16.289 "io_path_stat": false, 00:21:16.289 "allow_accel_sequence": false, 00:21:16.289 "rdma_max_cq_size": 0, 00:21:16.289 "rdma_cm_event_timeout_ms": 0, 00:21:16.289 "dhchap_digests": [ 00:21:16.289 "sha256", 
00:21:16.289 "sha384", 00:21:16.289 "sha512" 00:21:16.289 ], 00:21:16.289 "dhchap_dhgroups": [ 00:21:16.289 "null", 00:21:16.289 "ffdhe2048", 00:21:16.289 "ffdhe3072", 00:21:16.289 "ffdhe4096", 00:21:16.289 "ffdhe6144", 00:21:16.289 "ffdhe8192" 00:21:16.289 ] 00:21:16.289 } 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "method": "bdev_nvme_set_hotplug", 00:21:16.289 "params": { 00:21:16.289 "period_us": 100000, 00:21:16.289 "enable": false 00:21:16.289 } 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "method": "bdev_malloc_create", 00:21:16.289 "params": { 00:21:16.289 "name": "malloc0", 00:21:16.289 "num_blocks": 8192, 00:21:16.289 "block_size": 4096, 00:21:16.289 "physical_block_size": 4096, 00:21:16.289 "uuid": "937bed9a-995e-4aa2-91eb-23f8d487c7c9", 00:21:16.289 "optimal_io_boundary": 0 00:21:16.289 } 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "method": "bdev_wait_for_examine" 00:21:16.289 } 00:21:16.289 ] 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "subsystem": "nbd", 00:21:16.289 "config": [] 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "subsystem": "scheduler", 00:21:16.289 "config": [ 00:21:16.289 { 00:21:16.289 "method": "framework_set_scheduler", 00:21:16.289 "params": { 00:21:16.289 "name": "static" 00:21:16.289 } 00:21:16.289 } 00:21:16.289 ] 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "subsystem": "nvmf", 00:21:16.289 "config": [ 00:21:16.289 { 00:21:16.289 "method": "nvmf_set_config", 00:21:16.289 "params": { 00:21:16.289 "discovery_filter": "match_any", 00:21:16.289 "admin_cmd_passthru": { 00:21:16.289 "identify_ctrlr": false 00:21:16.289 } 00:21:16.289 } 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "method": "nvmf_set_max_subsystems", 00:21:16.289 "params": { 00:21:16.289 "max_subsystems": 1024 00:21:16.289 } 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "method": "nvmf_set_crdt", 00:21:16.289 "params": { 00:21:16.289 "crdt1": 0, 00:21:16.289 "crdt2": 0, 00:21:16.289 "crdt3": 0 00:21:16.289 } 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "method": "nvmf_create_transport", 00:21:16.289 "params": { 00:21:16.289 "trtype": "TCP", 00:21:16.289 "max_queue_depth": 128, 00:21:16.289 "max_io_qpairs_per_ctrlr": 127, 00:21:16.289 "in_capsule_data_size": 4096, 00:21:16.289 "max_io_size": 131072, 00:21:16.289 "io_unit_size": 131072, 00:21:16.289 "max_aq_depth": 128, 00:21:16.289 "num_shared_buffers": 511, 00:21:16.289 "buf_cache_size": 4294967295, 00:21:16.289 "dif_insert_or_strip": false, 00:21:16.289 "zcopy": false, 00:21:16.289 "c2h_success": false, 00:21:16.289 "sock_priority": 0, 00:21:16.289 "abort_timeout_sec": 1, 00:21:16.289 "ack_timeout": 0, 00:21:16.289 "data_wr_pool_size": 0 00:21:16.289 } 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "method": "nvmf_create_subsystem", 00:21:16.289 "params": { 00:21:16.289 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.289 "allow_any_host": false, 00:21:16.289 "serial_number": "00000000000000000000", 00:21:16.289 "model_number": "SPDK bdev Controller", 00:21:16.289 "max_namespaces": 32, 00:21:16.289 "min_cntlid": 1, 00:21:16.289 "max_cntlid": 65519, 00:21:16.289 "ana_reporting": false 00:21:16.289 } 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "method": "nvmf_subsystem_add_host", 00:21:16.289 "params": { 00:21:16.289 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.289 "host": "nqn.2016-06.io.spdk:host1", 00:21:16.289 "psk": "key0" 00:21:16.289 } 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "method": "nvmf_subsystem_add_ns", 00:21:16.289 "params": { 00:21:16.289 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.289 "namespace": { 00:21:16.289 "nsid": 1, 
00:21:16.289 "bdev_name": "malloc0", 00:21:16.289 "nguid": "937BED9A995E4AA291EB23F8D487C7C9", 00:21:16.289 "uuid": "937bed9a-995e-4aa2-91eb-23f8d487c7c9", 00:21:16.289 "no_auto_visible": false 00:21:16.289 } 00:21:16.289 } 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "method": "nvmf_subsystem_add_listener", 00:21:16.289 "params": { 00:21:16.289 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.289 "listen_address": { 00:21:16.289 "trtype": "TCP", 00:21:16.289 "adrfam": "IPv4", 00:21:16.289 "traddr": "10.0.0.2", 00:21:16.289 "trsvcid": "4420" 00:21:16.289 }, 00:21:16.289 "secure_channel": false, 00:21:16.289 "sock_impl": "ssl" 00:21:16.289 } 00:21:16.289 } 00:21:16.289 ] 00:21:16.289 } 00:21:16.289 ] 00:21:16.289 }' 00:21:16.289 20:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:16.289 20:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:16.289 "subsystems": [ 00:21:16.289 { 00:21:16.289 "subsystem": "keyring", 00:21:16.289 "config": [ 00:21:16.289 { 00:21:16.289 "method": "keyring_file_add_key", 00:21:16.289 "params": { 00:21:16.289 "name": "key0", 00:21:16.289 "path": "/tmp/tmp.sPMzwTZbj9" 00:21:16.289 } 00:21:16.289 } 00:21:16.289 ] 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "subsystem": "iobuf", 00:21:16.289 "config": [ 00:21:16.289 { 00:21:16.289 "method": "iobuf_set_options", 00:21:16.289 "params": { 00:21:16.289 "small_pool_count": 8192, 00:21:16.289 "large_pool_count": 1024, 00:21:16.289 "small_bufsize": 8192, 00:21:16.289 "large_bufsize": 135168 00:21:16.289 } 00:21:16.289 } 00:21:16.289 ] 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "subsystem": "sock", 00:21:16.289 "config": [ 00:21:16.289 { 00:21:16.289 "method": "sock_set_default_impl", 00:21:16.289 "params": { 00:21:16.289 "impl_name": "posix" 00:21:16.289 } 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "method": "sock_impl_set_options", 00:21:16.289 "params": { 00:21:16.289 "impl_name": "ssl", 00:21:16.289 "recv_buf_size": 4096, 00:21:16.289 "send_buf_size": 4096, 00:21:16.289 "enable_recv_pipe": true, 00:21:16.289 "enable_quickack": false, 00:21:16.289 "enable_placement_id": 0, 00:21:16.289 "enable_zerocopy_send_server": true, 00:21:16.289 "enable_zerocopy_send_client": false, 00:21:16.289 "zerocopy_threshold": 0, 00:21:16.289 "tls_version": 0, 00:21:16.289 "enable_ktls": false 00:21:16.289 } 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "method": "sock_impl_set_options", 00:21:16.289 "params": { 00:21:16.289 "impl_name": "posix", 00:21:16.289 "recv_buf_size": 2097152, 00:21:16.289 "send_buf_size": 2097152, 00:21:16.289 "enable_recv_pipe": true, 00:21:16.289 "enable_quickack": false, 00:21:16.289 "enable_placement_id": 0, 00:21:16.289 "enable_zerocopy_send_server": true, 00:21:16.289 "enable_zerocopy_send_client": false, 00:21:16.289 "zerocopy_threshold": 0, 00:21:16.289 "tls_version": 0, 00:21:16.289 "enable_ktls": false 00:21:16.289 } 00:21:16.289 } 00:21:16.289 ] 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "subsystem": "vmd", 00:21:16.289 "config": [] 00:21:16.289 }, 00:21:16.289 { 00:21:16.289 "subsystem": "accel", 00:21:16.289 "config": [ 00:21:16.289 { 00:21:16.289 "method": "accel_set_options", 00:21:16.289 "params": { 00:21:16.289 "small_cache_size": 128, 00:21:16.290 "large_cache_size": 16, 00:21:16.290 "task_count": 2048, 00:21:16.290 "sequence_count": 2048, 00:21:16.290 "buf_count": 2048 00:21:16.290 } 00:21:16.290 } 00:21:16.290 ] 00:21:16.290 }, 00:21:16.290 { 00:21:16.290 "subsystem": "bdev", 
00:21:16.290 "config": [ 00:21:16.290 { 00:21:16.290 "method": "bdev_set_options", 00:21:16.290 "params": { 00:21:16.290 "bdev_io_pool_size": 65535, 00:21:16.290 "bdev_io_cache_size": 256, 00:21:16.290 "bdev_auto_examine": true, 00:21:16.290 "iobuf_small_cache_size": 128, 00:21:16.290 "iobuf_large_cache_size": 16 00:21:16.290 } 00:21:16.290 }, 00:21:16.290 { 00:21:16.290 "method": "bdev_raid_set_options", 00:21:16.290 "params": { 00:21:16.290 "process_window_size_kb": 1024 00:21:16.290 } 00:21:16.290 }, 00:21:16.290 { 00:21:16.290 "method": "bdev_iscsi_set_options", 00:21:16.290 "params": { 00:21:16.290 "timeout_sec": 30 00:21:16.290 } 00:21:16.290 }, 00:21:16.290 { 00:21:16.290 "method": "bdev_nvme_set_options", 00:21:16.290 "params": { 00:21:16.290 "action_on_timeout": "none", 00:21:16.290 "timeout_us": 0, 00:21:16.290 "timeout_admin_us": 0, 00:21:16.290 "keep_alive_timeout_ms": 10000, 00:21:16.290 "arbitration_burst": 0, 00:21:16.290 "low_priority_weight": 0, 00:21:16.290 "medium_priority_weight": 0, 00:21:16.290 "high_priority_weight": 0, 00:21:16.290 "nvme_adminq_poll_period_us": 10000, 00:21:16.290 "nvme_ioq_poll_period_us": 0, 00:21:16.290 "io_queue_requests": 512, 00:21:16.290 "delay_cmd_submit": true, 00:21:16.290 "transport_retry_count": 4, 00:21:16.290 "bdev_retry_count": 3, 00:21:16.290 "transport_ack_timeout": 0, 00:21:16.290 "ctrlr_loss_timeout_sec": 0, 00:21:16.290 "reconnect_delay_sec": 0, 00:21:16.290 "fast_io_fail_timeout_sec": 0, 00:21:16.290 "disable_auto_failback": false, 00:21:16.290 "generate_uuids": false, 00:21:16.290 "transport_tos": 0, 00:21:16.290 "nvme_error_stat": false, 00:21:16.290 "rdma_srq_size": 0, 00:21:16.290 "io_path_stat": false, 00:21:16.290 "allow_accel_sequence": false, 00:21:16.290 "rdma_max_cq_size": 0, 00:21:16.290 "rdma_cm_event_timeout_ms": 0, 00:21:16.290 "dhchap_digests": [ 00:21:16.290 "sha256", 00:21:16.290 "sha384", 00:21:16.290 "sha512" 00:21:16.290 ], 00:21:16.290 "dhchap_dhgroups": [ 00:21:16.290 "null", 00:21:16.290 "ffdhe2048", 00:21:16.290 "ffdhe3072", 00:21:16.290 "ffdhe4096", 00:21:16.290 "ffdhe6144", 00:21:16.290 "ffdhe8192" 00:21:16.290 ] 00:21:16.290 } 00:21:16.290 }, 00:21:16.290 { 00:21:16.290 "method": "bdev_nvme_attach_controller", 00:21:16.290 "params": { 00:21:16.290 "name": "nvme0", 00:21:16.290 "trtype": "TCP", 00:21:16.290 "adrfam": "IPv4", 00:21:16.290 "traddr": "10.0.0.2", 00:21:16.290 "trsvcid": "4420", 00:21:16.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.290 "prchk_reftag": false, 00:21:16.290 "prchk_guard": false, 00:21:16.290 "ctrlr_loss_timeout_sec": 0, 00:21:16.290 "reconnect_delay_sec": 0, 00:21:16.290 "fast_io_fail_timeout_sec": 0, 00:21:16.290 "psk": "key0", 00:21:16.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.290 "hdgst": false, 00:21:16.290 "ddgst": false 00:21:16.290 } 00:21:16.290 }, 00:21:16.290 { 00:21:16.290 "method": "bdev_nvme_set_hotplug", 00:21:16.290 "params": { 00:21:16.290 "period_us": 100000, 00:21:16.290 "enable": false 00:21:16.290 } 00:21:16.290 }, 00:21:16.290 { 00:21:16.290 "method": "bdev_enable_histogram", 00:21:16.290 "params": { 00:21:16.290 "name": "nvme0n1", 00:21:16.290 "enable": true 00:21:16.290 } 00:21:16.290 }, 00:21:16.290 { 00:21:16.290 "method": "bdev_wait_for_examine" 00:21:16.290 } 00:21:16.290 ] 00:21:16.290 }, 00:21:16.290 { 00:21:16.290 "subsystem": "nbd", 00:21:16.290 "config": [] 00:21:16.290 } 00:21:16.290 ] 00:21:16.290 }' 00:21:16.290 20:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 1626550 00:21:16.290 20:57:20 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 1626550 ']' 00:21:16.290 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1626550 00:21:16.290 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:16.290 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.290 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1626550 00:21:16.290 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:16.290 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:16.290 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1626550' 00:21:16.290 killing process with pid 1626550 00:21:16.290 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1626550 00:21:16.290 Received shutdown signal, test time was about 1.000000 seconds 00:21:16.290 00:21:16.290 Latency(us) 00:21:16.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.290 =================================================================================================================== 00:21:16.290 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.290 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1626550 00:21:16.551 20:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 1626210 00:21:16.551 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1626210 ']' 00:21:16.551 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1626210 00:21:16.551 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:16.551 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.551 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1626210 00:21:16.551 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:16.551 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:16.551 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1626210' 00:21:16.551 killing process with pid 1626210 00:21:16.551 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1626210 00:21:16.551 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1626210 00:21:16.811 20:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:16.811 20:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.811 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:16.811 20:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:16.811 "subsystems": [ 00:21:16.811 { 00:21:16.811 "subsystem": "keyring", 00:21:16.811 "config": [ 00:21:16.811 { 00:21:16.811 "method": "keyring_file_add_key", 00:21:16.811 "params": { 00:21:16.811 "name": "key0", 00:21:16.811 "path": "/tmp/tmp.sPMzwTZbj9" 00:21:16.811 } 00:21:16.811 } 00:21:16.811 ] 00:21:16.811 }, 00:21:16.811 { 00:21:16.811 "subsystem": "iobuf", 00:21:16.811 "config": [ 00:21:16.811 { 00:21:16.811 "method": "iobuf_set_options", 00:21:16.811 "params": { 00:21:16.811 "small_pool_count": 8192, 00:21:16.811 "large_pool_count": 1024, 00:21:16.811 "small_bufsize": 8192, 00:21:16.811 "large_bufsize": 135168 00:21:16.811 } 00:21:16.811 } 00:21:16.812 ] 00:21:16.812 }, 
00:21:16.812 { 00:21:16.812 "subsystem": "sock", 00:21:16.812 "config": [ 00:21:16.812 { 00:21:16.812 "method": "sock_set_default_impl", 00:21:16.812 "params": { 00:21:16.812 "impl_name": "posix" 00:21:16.812 } 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "method": "sock_impl_set_options", 00:21:16.812 "params": { 00:21:16.812 "impl_name": "ssl", 00:21:16.812 "recv_buf_size": 4096, 00:21:16.812 "send_buf_size": 4096, 00:21:16.812 "enable_recv_pipe": true, 00:21:16.812 "enable_quickack": false, 00:21:16.812 "enable_placement_id": 0, 00:21:16.812 "enable_zerocopy_send_server": true, 00:21:16.812 "enable_zerocopy_send_client": false, 00:21:16.812 "zerocopy_threshold": 0, 00:21:16.812 "tls_version": 0, 00:21:16.812 "enable_ktls": false 00:21:16.812 } 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "method": "sock_impl_set_options", 00:21:16.812 "params": { 00:21:16.812 "impl_name": "posix", 00:21:16.812 "recv_buf_size": 2097152, 00:21:16.812 "send_buf_size": 2097152, 00:21:16.812 "enable_recv_pipe": true, 00:21:16.812 "enable_quickack": false, 00:21:16.812 "enable_placement_id": 0, 00:21:16.812 "enable_zerocopy_send_server": true, 00:21:16.812 "enable_zerocopy_send_client": false, 00:21:16.812 "zerocopy_threshold": 0, 00:21:16.812 "tls_version": 0, 00:21:16.812 "enable_ktls": false 00:21:16.812 } 00:21:16.812 } 00:21:16.812 ] 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "subsystem": "vmd", 00:21:16.812 "config": [] 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "subsystem": "accel", 00:21:16.812 "config": [ 00:21:16.812 { 00:21:16.812 "method": "accel_set_options", 00:21:16.812 "params": { 00:21:16.812 "small_cache_size": 128, 00:21:16.812 "large_cache_size": 16, 00:21:16.812 "task_count": 2048, 00:21:16.812 "sequence_count": 2048, 00:21:16.812 "buf_count": 2048 00:21:16.812 } 00:21:16.812 } 00:21:16.812 ] 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "subsystem": "bdev", 00:21:16.812 "config": [ 00:21:16.812 { 00:21:16.812 "method": "bdev_set_options", 00:21:16.812 "params": { 00:21:16.812 "bdev_io_pool_size": 65535, 00:21:16.812 "bdev_io_cache_size": 256, 00:21:16.812 "bdev_auto_examine": true, 00:21:16.812 "iobuf_small_cache_size": 128, 00:21:16.812 "iobuf_large_cache_size": 16 00:21:16.812 } 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "method": "bdev_raid_set_options", 00:21:16.812 "params": { 00:21:16.812 "process_window_size_kb": 1024 00:21:16.812 } 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "method": "bdev_iscsi_set_options", 00:21:16.812 "params": { 00:21:16.812 "timeout_sec": 30 00:21:16.812 } 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "method": "bdev_nvme_set_options", 00:21:16.812 "params": { 00:21:16.812 "action_on_timeout": "none", 00:21:16.812 "timeout_us": 0, 00:21:16.812 "timeout_admin_us": 0, 00:21:16.812 "keep_alive_timeout_ms": 10000, 00:21:16.812 "arbitration_burst": 0, 00:21:16.812 "low_priority_weight": 0, 00:21:16.812 "medium_priority_weight": 0, 00:21:16.812 "high_priority_weight": 0, 00:21:16.812 "nvme_adminq_poll_period_us": 10000, 00:21:16.812 "nvme_ioq_poll_period_us": 0, 00:21:16.812 "io_queue_requests": 0, 00:21:16.812 "delay_cmd_submit": true, 00:21:16.812 "transport_retry_count": 4, 00:21:16.812 "bdev_retry_count": 3, 00:21:16.812 "transport_ack_timeout": 0, 00:21:16.812 "ctrlr_loss_timeout_sec": 0, 00:21:16.812 "reconnect_delay_sec": 0, 00:21:16.812 "fast_io_fail_timeout_sec": 0, 00:21:16.812 "disable_auto_failback": false, 00:21:16.812 "generate_uuids": false, 00:21:16.812 "transport_tos": 0, 00:21:16.812 "nvme_error_stat": false, 00:21:16.812 "rdma_srq_size": 0, 
00:21:16.812 "io_path_stat": false, 00:21:16.812 "allow_accel_sequence": false, 00:21:16.812 "rdma_max_cq_size": 0, 00:21:16.812 "rdma_cm_event_timeout_ms": 0, 00:21:16.812 "dhchap_digests": [ 00:21:16.812 "sha256", 00:21:16.812 "sha384", 00:21:16.812 "sha512" 00:21:16.812 ], 00:21:16.812 "dhchap_dhgroups": [ 00:21:16.812 "null", 00:21:16.812 "ffdhe2048", 00:21:16.812 "ffdhe3072", 00:21:16.812 "ffdhe4096", 00:21:16.812 "ffdhe6144", 00:21:16.812 "ffdhe8192" 00:21:16.812 ] 00:21:16.812 } 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "method": "bdev_nvme_set_hotplug", 00:21:16.812 "params": { 00:21:16.812 "period_us": 100000, 00:21:16.812 "enable": false 00:21:16.812 } 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "method": "bdev_malloc_create", 00:21:16.812 "params": { 00:21:16.812 "name": "malloc0", 00:21:16.812 "num_blocks": 8192, 00:21:16.812 "block_size": 4096, 00:21:16.812 "physical_block_size": 4096, 00:21:16.812 "uuid": "937bed9a-995e-4aa2-91eb-23f8d487c7c9", 00:21:16.812 "optimal_io_boundary": 0 00:21:16.812 } 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "method": "bdev_wait_for_examine" 00:21:16.812 } 00:21:16.812 ] 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "subsystem": "nbd", 00:21:16.812 "config": [] 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "subsystem": "scheduler", 00:21:16.812 "config": [ 00:21:16.812 { 00:21:16.812 "method": "framework_set_scheduler", 00:21:16.812 "params": { 00:21:16.812 "name": "static" 00:21:16.812 } 00:21:16.812 } 00:21:16.812 ] 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "subsystem": "nvmf", 00:21:16.812 "config": [ 00:21:16.812 { 00:21:16.812 "method": "nvmf_set_config", 00:21:16.812 "params": { 00:21:16.812 "discovery_filter": "match_any", 00:21:16.812 "admin_cmd_passthru": { 00:21:16.812 "identify_ctrlr": false 00:21:16.812 } 00:21:16.812 } 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "method": "nvmf_set_max_subsystems", 00:21:16.812 "params": { 00:21:16.812 "max_subsystems": 1024 00:21:16.812 } 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "method": "nvmf_set_crdt", 00:21:16.812 "params": { 00:21:16.812 "crdt1": 0, 00:21:16.812 "crdt2": 0, 00:21:16.812 "crdt3": 0 00:21:16.812 } 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "method": "nvmf_create_transport", 00:21:16.812 "params": { 00:21:16.812 "trtype": "TCP", 00:21:16.812 "max_queue_depth": 128, 00:21:16.812 "max_io_qpairs_per_ctrlr": 127, 00:21:16.812 "in_capsule_data_size": 4096, 00:21:16.812 "max_io_size": 131072, 00:21:16.812 "io_unit_size": 131072, 00:21:16.812 "max_aq_depth": 128, 00:21:16.812 "num_shared_buffers": 511, 00:21:16.812 "buf_cache_size": 4294967295, 00:21:16.812 "dif_insert_or_strip": false, 00:21:16.812 "zcopy": false, 00:21:16.812 "c2h_success": false, 00:21:16.812 "sock_priority": 0, 00:21:16.812 "abort_timeout_sec": 1, 00:21:16.812 "ack_timeout": 0, 00:21:16.812 "data_wr_pool_size": 0 00:21:16.812 } 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "method": "nvmf_create_subsystem", 00:21:16.812 "params": { 00:21:16.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.812 "allow_any_host": false, 00:21:16.812 "serial_number": "00000000000000000000", 00:21:16.812 "model_number": "SPDK bdev Controller", 00:21:16.812 "max_namespaces": 32, 00:21:16.812 "min_cntlid": 1, 00:21:16.812 "max_cntlid": 65519, 00:21:16.812 "ana_reporting": false 00:21:16.812 } 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "method": "nvmf_subsystem_add_host", 00:21:16.812 "params": { 00:21:16.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.812 "host": "nqn.2016-06.io.spdk:host1", 00:21:16.812 "psk": "key0" 00:21:16.812 } 
00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "method": "nvmf_subsystem_add_ns", 00:21:16.812 "params": { 00:21:16.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.812 "namespace": { 00:21:16.812 "nsid": 1, 00:21:16.812 "bdev_name": "malloc0", 00:21:16.812 "nguid": "937BED9A995E4AA291EB23F8D487C7C9", 00:21:16.812 "uuid": "937bed9a-995e-4aa2-91eb-23f8d487c7c9", 00:21:16.812 "no_auto_visible": false 00:21:16.812 } 00:21:16.812 } 00:21:16.812 }, 00:21:16.812 { 00:21:16.812 "method": "nvmf_subsystem_add_listener", 00:21:16.812 "params": { 00:21:16.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.812 "listen_address": { 00:21:16.812 "trtype": "TCP", 00:21:16.812 "adrfam": "IPv4", 00:21:16.812 "traddr": "10.0.0.2", 00:21:16.812 "trsvcid": "4420" 00:21:16.812 }, 00:21:16.812 "secure_channel": false, 00:21:16.812 "sock_impl": "ssl" 00:21:16.812 } 00:21:16.812 } 00:21:16.812 ] 00:21:16.812 } 00:21:16.812 ] 00:21:16.812 }' 00:21:16.812 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.812 20:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1627219 00:21:16.812 20:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1627219 00:21:16.813 20:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:16.813 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1627219 ']' 00:21:16.813 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.813 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.813 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.813 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.813 20:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.813 [2024-07-15 20:57:20.548212] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:21:16.813 [2024-07-15 20:57:20.548267] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.813 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.813 [2024-07-15 20:57:20.613543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.813 [2024-07-15 20:57:20.679223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.813 [2024-07-15 20:57:20.679259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.813 [2024-07-15 20:57:20.679266] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.813 [2024-07-15 20:57:20.679272] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.813 [2024-07-15 20:57:20.679278] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
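The nvmf_tgt restart just above is given '-c /dev/fd/62' rather than a config file on disk: the JSON blob echoed immediately before it is the output of save_config, fed back in through process substitution (the exact fd number is whatever the shell assigns, 62 in this run). A sketch of that plumbing, with workspace paths shortened:

  # Capture the live target configuration as JSON ...
  tgtcfg=$(scripts/rpc.py save_config)
  # ... and hand it straight back to a fresh target instance; <(...) shows up
  # to the child as a /dev/fd/NN path, so no temporary config file is needed.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")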
00:21:16.813 [2024-07-15 20:57:20.679326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.073 [2024-07-15 20:57:20.876350] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.073 [2024-07-15 20:57:20.908372] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:17.073 [2024-07-15 20:57:20.919433] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1627266 00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1627266 /var/tmp/bdevperf.sock 00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1627266 ']' 00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
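The bdevperf instance started next is driven the same way: its '-c /dev/fd/63' argument points at the bperfcfg JSON shown below, which bakes keyring_file_add_key and bdev_nvme_attach_controller into the bdev subsystem config, so the TLS-attached controller exists as soon as the app is up and no post-start attach RPCs are needed. The test then only has to confirm the controller and kick off I/O, which it does a few lines further down; in sketch form:

  # Confirm the controller created from the JSON config is present ...
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # expected output: nvme0
  # ... then drive the verify workload through bdevperf's RPC helper.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests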
00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.643 20:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:17.643 "subsystems": [ 00:21:17.643 { 00:21:17.643 "subsystem": "keyring", 00:21:17.643 "config": [ 00:21:17.643 { 00:21:17.643 "method": "keyring_file_add_key", 00:21:17.643 "params": { 00:21:17.643 "name": "key0", 00:21:17.643 "path": "/tmp/tmp.sPMzwTZbj9" 00:21:17.643 } 00:21:17.643 } 00:21:17.643 ] 00:21:17.643 }, 00:21:17.643 { 00:21:17.643 "subsystem": "iobuf", 00:21:17.643 "config": [ 00:21:17.643 { 00:21:17.643 "method": "iobuf_set_options", 00:21:17.643 "params": { 00:21:17.643 "small_pool_count": 8192, 00:21:17.643 "large_pool_count": 1024, 00:21:17.643 "small_bufsize": 8192, 00:21:17.643 "large_bufsize": 135168 00:21:17.643 } 00:21:17.643 } 00:21:17.643 ] 00:21:17.643 }, 00:21:17.643 { 00:21:17.643 "subsystem": "sock", 00:21:17.643 "config": [ 00:21:17.643 { 00:21:17.643 "method": "sock_set_default_impl", 00:21:17.643 "params": { 00:21:17.643 "impl_name": "posix" 00:21:17.643 } 00:21:17.643 }, 00:21:17.643 { 00:21:17.643 "method": "sock_impl_set_options", 00:21:17.643 "params": { 00:21:17.643 "impl_name": "ssl", 00:21:17.643 "recv_buf_size": 4096, 00:21:17.643 "send_buf_size": 4096, 00:21:17.643 "enable_recv_pipe": true, 00:21:17.643 "enable_quickack": false, 00:21:17.643 "enable_placement_id": 0, 00:21:17.643 "enable_zerocopy_send_server": true, 00:21:17.643 "enable_zerocopy_send_client": false, 00:21:17.643 "zerocopy_threshold": 0, 00:21:17.643 "tls_version": 0, 00:21:17.643 "enable_ktls": false 00:21:17.643 } 00:21:17.643 }, 00:21:17.643 { 00:21:17.643 "method": "sock_impl_set_options", 00:21:17.643 "params": { 00:21:17.643 "impl_name": "posix", 00:21:17.643 "recv_buf_size": 2097152, 00:21:17.643 "send_buf_size": 2097152, 00:21:17.643 "enable_recv_pipe": true, 00:21:17.643 "enable_quickack": false, 00:21:17.643 "enable_placement_id": 0, 00:21:17.643 "enable_zerocopy_send_server": true, 00:21:17.643 "enable_zerocopy_send_client": false, 00:21:17.643 "zerocopy_threshold": 0, 00:21:17.643 "tls_version": 0, 00:21:17.643 "enable_ktls": false 00:21:17.643 } 00:21:17.643 } 00:21:17.643 ] 00:21:17.643 }, 00:21:17.643 { 00:21:17.643 "subsystem": "vmd", 00:21:17.643 "config": [] 00:21:17.643 }, 00:21:17.643 { 00:21:17.643 "subsystem": "accel", 00:21:17.643 "config": [ 00:21:17.643 { 00:21:17.643 "method": "accel_set_options", 00:21:17.643 "params": { 00:21:17.643 "small_cache_size": 128, 00:21:17.643 "large_cache_size": 16, 00:21:17.643 "task_count": 2048, 00:21:17.643 "sequence_count": 2048, 00:21:17.643 "buf_count": 2048 00:21:17.643 } 00:21:17.643 } 00:21:17.643 ] 00:21:17.643 }, 00:21:17.643 { 00:21:17.643 "subsystem": "bdev", 00:21:17.643 "config": [ 00:21:17.643 { 00:21:17.643 "method": "bdev_set_options", 00:21:17.643 "params": { 00:21:17.643 "bdev_io_pool_size": 65535, 00:21:17.643 "bdev_io_cache_size": 256, 00:21:17.643 "bdev_auto_examine": true, 00:21:17.643 "iobuf_small_cache_size": 128, 00:21:17.643 "iobuf_large_cache_size": 16 00:21:17.643 } 00:21:17.643 }, 00:21:17.643 { 00:21:17.643 "method": "bdev_raid_set_options", 00:21:17.643 "params": { 00:21:17.643 "process_window_size_kb": 1024 00:21:17.643 } 
00:21:17.643 }, 00:21:17.643 { 00:21:17.643 "method": "bdev_iscsi_set_options", 00:21:17.643 "params": { 00:21:17.643 "timeout_sec": 30 00:21:17.643 } 00:21:17.643 }, 00:21:17.643 { 00:21:17.643 "method": "bdev_nvme_set_options", 00:21:17.643 "params": { 00:21:17.643 "action_on_timeout": "none", 00:21:17.643 "timeout_us": 0, 00:21:17.643 "timeout_admin_us": 0, 00:21:17.643 "keep_alive_timeout_ms": 10000, 00:21:17.643 "arbitration_burst": 0, 00:21:17.643 "low_priority_weight": 0, 00:21:17.643 "medium_priority_weight": 0, 00:21:17.643 "high_priority_weight": 0, 00:21:17.643 "nvme_adminq_poll_period_us": 10000, 00:21:17.643 "nvme_ioq_poll_period_us": 0, 00:21:17.643 "io_queue_requests": 512, 00:21:17.643 "delay_cmd_submit": true, 00:21:17.643 "transport_retry_count": 4, 00:21:17.643 "bdev_retry_count": 3, 00:21:17.643 "transport_ack_timeout": 0, 00:21:17.643 "ctrlr_loss_timeout_sec": 0, 00:21:17.643 "reconnect_delay_sec": 0, 00:21:17.643 "fast_io_fail_timeout_sec": 0, 00:21:17.643 "disable_auto_failback": false, 00:21:17.643 "generate_uuids": false, 00:21:17.643 "transport_tos": 0, 00:21:17.643 "nvme_error_stat": false, 00:21:17.643 "rdma_srq_size": 0, 00:21:17.643 "io_path_stat": false, 00:21:17.643 "allow_accel_sequence": false, 00:21:17.643 "rdma_max_cq_size": 0, 00:21:17.643 "rdma_cm_event_timeout_ms": 0, 00:21:17.643 "dhchap_digests": [ 00:21:17.643 "sha256", 00:21:17.643 "sha384", 00:21:17.644 "sha512" 00:21:17.644 ], 00:21:17.644 "dhchap_dhgroups": [ 00:21:17.644 "null", 00:21:17.644 "ffdhe2048", 00:21:17.644 "ffdhe3072", 00:21:17.644 "ffdhe4096", 00:21:17.644 "ffdhe6144", 00:21:17.644 "ffdhe8192" 00:21:17.644 ] 00:21:17.644 } 00:21:17.644 }, 00:21:17.644 { 00:21:17.644 "method": "bdev_nvme_attach_controller", 00:21:17.644 "params": { 00:21:17.644 "name": "nvme0", 00:21:17.644 "trtype": "TCP", 00:21:17.644 "adrfam": "IPv4", 00:21:17.644 "traddr": "10.0.0.2", 00:21:17.644 "trsvcid": "4420", 00:21:17.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.644 "prchk_reftag": false, 00:21:17.644 "prchk_guard": false, 00:21:17.644 "ctrlr_loss_timeout_sec": 0, 00:21:17.644 "reconnect_delay_sec": 0, 00:21:17.644 "fast_io_fail_timeout_sec": 0, 00:21:17.644 "psk": "key0", 00:21:17.644 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.644 "hdgst": false, 00:21:17.644 "ddgst": false 00:21:17.644 } 00:21:17.644 }, 00:21:17.644 { 00:21:17.644 "method": "bdev_nvme_set_hotplug", 00:21:17.644 "params": { 00:21:17.644 "period_us": 100000, 00:21:17.644 "enable": false 00:21:17.644 } 00:21:17.644 }, 00:21:17.644 { 00:21:17.644 "method": "bdev_enable_histogram", 00:21:17.644 "params": { 00:21:17.644 "name": "nvme0n1", 00:21:17.644 "enable": true 00:21:17.644 } 00:21:17.644 }, 00:21:17.644 { 00:21:17.644 "method": "bdev_wait_for_examine" 00:21:17.644 } 00:21:17.644 ] 00:21:17.644 }, 00:21:17.644 { 00:21:17.644 "subsystem": "nbd", 00:21:17.644 "config": [] 00:21:17.644 } 00:21:17.644 ] 00:21:17.644 }' 00:21:17.644 [2024-07-15 20:57:21.401876] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:21:17.644 [2024-07-15 20:57:21.401944] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1627266 ] 00:21:17.644 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.644 [2024-07-15 20:57:21.477320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.644 [2024-07-15 20:57:21.531086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.904 [2024-07-15 20:57:21.664796] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.532 20:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.532 20:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:18.532 20:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:18.532 20:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:21:18.532 20:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.532 20:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:18.532 Running I/O for 1 seconds... 00:21:19.935 00:21:19.935 Latency(us) 00:21:19.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.935 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:19.935 Verification LBA range: start 0x0 length 0x2000 00:21:19.935 nvme0n1 : 1.04 2509.18 9.80 0.00 0.00 50144.25 4642.13 62040.75 00:21:19.935 =================================================================================================================== 00:21:19.935 Total : 2509.18 9.80 0.00 0.00 50144.25 4642.13 62040.75 00:21:19.935 0 00:21:19.935 20:57:23 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:19.935 20:57:23 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:19.935 20:57:23 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:19.935 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:19.935 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:19.935 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:19.935 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:19.935 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:19.935 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:19.935 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:19.935 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:19.936 nvmf_trace.0 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1627266 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1627266 ']' 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
kill -0 1627266 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1627266 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1627266' 00:21:19.936 killing process with pid 1627266 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1627266 00:21:19.936 Received shutdown signal, test time was about 1.000000 seconds 00:21:19.936 00:21:19.936 Latency(us) 00:21:19.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.936 =================================================================================================================== 00:21:19.936 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1627266 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.936 rmmod nvme_tcp 00:21:19.936 rmmod nvme_fabrics 00:21:19.936 rmmod nvme_keyring 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1627219 ']' 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1627219 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1627219 ']' 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1627219 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:19.936 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1627219 00:21:20.197 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:20.197 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:20.197 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1627219' 00:21:20.197 killing process with pid 1627219 00:21:20.197 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1627219 00:21:20.197 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1627219 00:21:20.197 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:20.197 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:20.197 20:57:23 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:20.197 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:20.197 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:20.197 20:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.197 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.197 20:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.743 20:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:22.743 20:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ZjY0TIshd7 /tmp/tmp.WunW11Eceq /tmp/tmp.sPMzwTZbj9 00:21:22.743 00:21:22.743 real 1m21.818s 00:21:22.743 user 2m3.332s 00:21:22.743 sys 0m28.942s 00:21:22.743 20:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:22.743 20:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.743 ************************************ 00:21:22.744 END TEST nvmf_tls 00:21:22.744 ************************************ 00:21:22.744 20:57:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:22.744 20:57:26 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:22.744 20:57:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:22.744 20:57:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:22.744 20:57:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:22.744 ************************************ 00:21:22.744 START TEST nvmf_fips 00:21:22.744 ************************************ 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:22.744 * Looking for test storage... 
00:21:22.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.744 20:57:26 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:22.744 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:22.745 Error setting digest 00:21:22.745 00E23CD7CF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:22.745 00E23CD7CF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:22.745 20:57:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:30.888 
20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:30.888 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:30.888 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:30.888 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:30.888 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:30.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:21:30.888 00:21:30.888 --- 10.0.0.2 ping statistics --- 00:21:30.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.888 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:30.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:21:30.888 00:21:30.888 --- 10.0.0.1 ping statistics --- 00:21:30.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.888 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1631959 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1631959 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1631959 ']' 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.888 20:57:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:30.888 [2024-07-15 20:57:33.729079] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:21:30.888 [2024-07-15 20:57:33.729137] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.888 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.888 [2024-07-15 20:57:33.809069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.888 [2024-07-15 20:57:33.872494] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.888 [2024-07-15 20:57:33.872534] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:30.888 [2024-07-15 20:57:33.872541] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.888 [2024-07-15 20:57:33.872548] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.888 [2024-07-15 20:57:33.872553] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:30.888 [2024-07-15 20:57:33.872574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.888 20:57:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.888 20:57:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:30.888 20:57:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:30.888 20:57:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.888 20:57:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:30.888 20:57:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.888 20:57:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:30.888 20:57:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:30.888 20:57:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:30.888 20:57:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:30.888 20:57:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:30.888 20:57:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:30.888 20:57:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:30.888 20:57:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:30.888 [2024-07-15 20:57:34.725277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.888 [2024-07-15 20:57:34.741276] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:30.888 [2024-07-15 20:57:34.741569] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.888 [2024-07-15 20:57:34.771364] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:30.888 malloc0 00:21:31.148 20:57:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.148 20:57:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1632308 00:21:31.148 20:57:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1632308 /var/tmp/bdevperf.sock 00:21:31.148 20:57:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:31.148 20:57:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1632308 ']' 00:21:31.148 20:57:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.148 20:57:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:21:31.148 20:57:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.148 20:57:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.148 20:57:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:31.148 [2024-07-15 20:57:34.871688] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:21:31.148 [2024-07-15 20:57:34.871768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632308 ] 00:21:31.148 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.148 [2024-07-15 20:57:34.928811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.148 [2024-07-15 20:57:34.992373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.091 20:57:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.091 20:57:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:32.091 20:57:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:32.091 [2024-07-15 20:57:35.775953] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:32.091 [2024-07-15 20:57:35.776010] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:32.091 TLSTESTn1 00:21:32.091 20:57:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:32.091 Running I/O for 10 seconds... 
00:21:44.323 00:21:44.323 Latency(us) 00:21:44.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.323 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:44.323 Verification LBA range: start 0x0 length 0x2000 00:21:44.323 TLSTESTn1 : 10.06 3192.19 12.47 0.00 0.00 39966.09 6116.69 115343.36 00:21:44.323 =================================================================================================================== 00:21:44.323 Total : 3192.19 12.47 0.00 0.00 39966.09 6116.69 115343.36 00:21:44.323 0 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:44.323 nvmf_trace.0 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1632308 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1632308 ']' 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1632308 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1632308 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1632308' 00:21:44.323 killing process with pid 1632308 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1632308 00:21:44.323 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.323 00:21:44.323 Latency(us) 00:21:44.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.323 =================================================================================================================== 00:21:44.323 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.323 [2024-07-15 20:57:46.219044] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1632308 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips 
-- nvmf/common.sh@488 -- # nvmfcleanup 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:44.323 rmmod nvme_tcp 00:21:44.323 rmmod nvme_fabrics 00:21:44.323 rmmod nvme_keyring 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1631959 ']' 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1631959 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1631959 ']' 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1631959 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1631959 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1631959' 00:21:44.323 killing process with pid 1631959 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1631959 00:21:44.323 [2024-07-15 20:57:46.459081] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1631959 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.323 20:57:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.895 20:57:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:44.895 20:57:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:44.895 00:21:44.895 real 0m22.512s 00:21:44.895 user 0m23.258s 00:21:44.895 sys 0m9.993s 00:21:44.895 20:57:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:44.895 20:57:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:44.895 ************************************ 00:21:44.895 END TEST nvmf_fips 
00:21:44.895 ************************************ 00:21:44.895 20:57:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:44.895 20:57:48 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:44.895 20:57:48 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:44.895 20:57:48 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:44.895 20:57:48 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:44.895 20:57:48 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:44.895 20:57:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:51.488 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:51.488 20:57:55 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.488 20:57:55 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:51.488 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:51.751 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:51.751 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:51.751 20:57:55 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:51.751 20:57:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:51.751 20:57:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:21:51.751 20:57:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:51.751 ************************************ 00:21:51.751 START TEST nvmf_perf_adq 00:21:51.751 ************************************ 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:51.751 * Looking for test storage... 00:21:51.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:51.751 20:57:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:58.367 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:58.367 Found 0000:4b:00.1 (0x8086 - 0x159b) 
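As a rough standalone sketch of the device discovery traced above — common.sh classifies PCI functions by vendor/device ID (0x8086:0x159b lands in the e810 bucket) and then resolves the kernel net device for each function from sysfs — something like the following should behave the same way on a typical Linux host. The device IDs 0x1592/0x159b and the cvl_* names are taken from this log and are not guaranteed on other rigs.
#!/usr/bin/env bash
# Rough sketch of the E810 discovery performed by nvmf/common.sh above:
# walk PCI devices, keep Intel E810 functions (0x1592/0x159b), and print
# the net device that sits under each function in sysfs.
intel=0x8086
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor")
    device=$(cat "$dev/device")
    [[ $vendor == "$intel" ]] || continue
    case $device in
        0x1592|0x159b) ;;            # E810 device IDs checked in the trace
        *) continue ;;
    esac
    pci=${dev##*/}
    # A function bound to a network driver exposes its interface under <pci>/net/
    for net in "$dev"/net/*; do
        [[ -e $net ]] || continue
        echo "Found net device under $pci: ${net##*/}"
    done
done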
00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:58.367 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.367 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.628 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.628 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:58.628 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:58.628 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.628 20:58:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:58.628 20:58:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.628 20:58:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:58.628 20:58:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:58.628 20:58:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:58.628 20:58:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:00.007 20:58:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:01.918 20:58:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:07.204 20:58:10 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:07.204 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:07.204 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.204 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:07.204 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:07.205 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.205 20:58:10 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:07.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:22:07.205 00:22:07.205 --- 10.0.0.2 ping statistics --- 00:22:07.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.205 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:07.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:22:07.205 00:22:07.205 --- 10.0.0.1 ping statistics --- 00:22:07.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.205 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1643983 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1643983 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1643983 ']' 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.205 20:58:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.205 [2024-07-15 20:58:11.055486] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
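The nvmftestinit trace above builds a loopback topology out of the two physical E810 ports: one port is moved into a private network namespace and addressed as 10.0.0.2/24 (the target side), the other stays in the root namespace as 10.0.0.1/24 (the initiator side), TCP port 4420 is opened, and connectivity is verified with ping in both directions before the target is started inside the namespace. A condensed sketch of the same steps, assuming interfaces named cvl_0_0/cvl_0_1 as in this log and root privileges:
# Condensed sketch of the netns setup traced above (names/addresses as in this log).
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # becomes the target-side port inside the namespace
INI_IF=cvl_0_1        # stays in the root namespace as the initiator-side port

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # root namespace -> target port
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator port
# The harness then launches nvmf_tgt inside the namespace:
#   ip netns exec "$NS" nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc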
00:22:07.205 [2024-07-15 20:58:11.055571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.205 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.466 [2024-07-15 20:58:11.127422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:07.466 [2024-07-15 20:58:11.204078] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.466 [2024-07-15 20:58:11.204117] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.466 [2024-07-15 20:58:11.204130] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.466 [2024-07-15 20:58:11.204137] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.466 [2024-07-15 20:58:11.204143] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:07.466 [2024-07-15 20:58:11.204274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.466 [2024-07-15 20:58:11.204480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.466 [2024-07-15 20:58:11.204639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.466 [2024-07-15 20:58:11.204639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.037 20:58:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:08.037 20:58:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:08.037 20:58:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:08.037 20:58:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:08.037 20:58:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.037 20:58:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.037 20:58:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:08.037 20:58:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:08.037 20:58:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:08.037 20:58:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.037 20:58:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.037 20:58:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.037 20:58:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:08.038 20:58:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:08.038 20:58:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.038 20:58:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.038 20:58:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.038 20:58:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:08.038 20:58:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.038 20:58:11 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:08.298 20:58:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.298 20:58:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:08.298 20:58:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.298 20:58:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.298 [2024-07-15 20:58:12.002160] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.298 Malloc1 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.298 [2024-07-15 20:58:12.061528] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1644230 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:08.298 20:58:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:08.298 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.212 20:58:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:10.212 20:58:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.212 20:58:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.212 20:58:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.212 20:58:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:10.212 
"tick_rate": 2400000000, 00:22:10.212 "poll_groups": [ 00:22:10.212 { 00:22:10.212 "name": "nvmf_tgt_poll_group_000", 00:22:10.212 "admin_qpairs": 1, 00:22:10.212 "io_qpairs": 1, 00:22:10.212 "current_admin_qpairs": 1, 00:22:10.212 "current_io_qpairs": 1, 00:22:10.212 "pending_bdev_io": 0, 00:22:10.212 "completed_nvme_io": 19887, 00:22:10.212 "transports": [ 00:22:10.212 { 00:22:10.212 "trtype": "TCP" 00:22:10.212 } 00:22:10.212 ] 00:22:10.212 }, 00:22:10.212 { 00:22:10.212 "name": "nvmf_tgt_poll_group_001", 00:22:10.212 "admin_qpairs": 0, 00:22:10.212 "io_qpairs": 1, 00:22:10.212 "current_admin_qpairs": 0, 00:22:10.212 "current_io_qpairs": 1, 00:22:10.212 "pending_bdev_io": 0, 00:22:10.212 "completed_nvme_io": 29599, 00:22:10.212 "transports": [ 00:22:10.212 { 00:22:10.212 "trtype": "TCP" 00:22:10.212 } 00:22:10.212 ] 00:22:10.212 }, 00:22:10.212 { 00:22:10.212 "name": "nvmf_tgt_poll_group_002", 00:22:10.212 "admin_qpairs": 0, 00:22:10.212 "io_qpairs": 1, 00:22:10.212 "current_admin_qpairs": 0, 00:22:10.212 "current_io_qpairs": 1, 00:22:10.212 "pending_bdev_io": 0, 00:22:10.212 "completed_nvme_io": 19862, 00:22:10.212 "transports": [ 00:22:10.212 { 00:22:10.212 "trtype": "TCP" 00:22:10.212 } 00:22:10.212 ] 00:22:10.212 }, 00:22:10.212 { 00:22:10.212 "name": "nvmf_tgt_poll_group_003", 00:22:10.212 "admin_qpairs": 0, 00:22:10.212 "io_qpairs": 1, 00:22:10.212 "current_admin_qpairs": 0, 00:22:10.212 "current_io_qpairs": 1, 00:22:10.212 "pending_bdev_io": 0, 00:22:10.212 "completed_nvme_io": 21543, 00:22:10.212 "transports": [ 00:22:10.212 { 00:22:10.212 "trtype": "TCP" 00:22:10.212 } 00:22:10.212 ] 00:22:10.212 } 00:22:10.212 ] 00:22:10.212 }' 00:22:10.212 20:58:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:10.212 20:58:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:10.473 20:58:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:10.473 20:58:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:10.473 20:58:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1644230 00:22:18.610 Initializing NVMe Controllers 00:22:18.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:18.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:18.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:18.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:18.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:18.610 Initialization complete. Launching workers. 
00:22:18.610 ======================================================== 00:22:18.610 Latency(us) 00:22:18.610 Device Information : IOPS MiB/s Average min max 00:22:18.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12144.50 47.44 5270.79 1964.51 7922.86 00:22:18.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15186.10 59.32 4214.51 1400.37 8196.50 00:22:18.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13867.70 54.17 4614.65 1370.49 12413.75 00:22:18.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13502.90 52.75 4748.51 1439.39 44087.39 00:22:18.610 ======================================================== 00:22:18.610 Total : 54701.20 213.68 4682.28 1370.49 44087.39 00:22:18.610 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:18.610 rmmod nvme_tcp 00:22:18.610 rmmod nvme_fabrics 00:22:18.610 rmmod nvme_keyring 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1643983 ']' 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1643983 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1643983 ']' 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1643983 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1643983 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1643983' 00:22:18.610 killing process with pid 1643983 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1643983 00:22:18.610 20:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1643983 00:22:18.871 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:18.871 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:18.871 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:18.871 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:18.871 20:58:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:18.871 20:58:22 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.871 20:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:18.871 20:58:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.787 20:58:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:20.787 20:58:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:20.787 20:58:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:22.703 20:58:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:24.617 20:58:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.929 20:58:33 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.929 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:29.930 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:29.930 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:29.930 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:29.930 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.930 
20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:29.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:22:29.930 00:22:29.930 --- 10.0.0.2 ping statistics --- 00:22:29.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.930 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:29.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:22:29.930 00:22:29.930 --- 10.0.0.1 ping statistics --- 00:22:29.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.930 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:29.930 net.core.busy_poll = 1 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:29.930 net.core.busy_read = 1 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1648859 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1648859 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1648859 ']' 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.930 20:58:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.191 [2024-07-15 20:58:33.855486] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:22:30.191 [2024-07-15 20:58:33.855554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.191 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.191 [2024-07-15 20:58:33.926279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.191 [2024-07-15 20:58:34.002779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.191 [2024-07-15 20:58:34.002818] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.191 [2024-07-15 20:58:34.002825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.191 [2024-07-15 20:58:34.002832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.191 [2024-07-15 20:58:34.002837] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
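adq_configure_driver, traced above, is where the E810 side of ADQ is set up: hardware TC offload is switched on, the channel packet-inspection optimization is disabled, busy polling is enabled kernel-wide, and an mqprio qdisc plus a flower filter steer NVMe/TCP traffic bound for 10.0.0.2:4420 into traffic class 1 in hardware. A sketch of those commands outside the test harness is below; the interface name, destination IP, and 2@0/2@2 queue split are the values from this log, and the harness actually runs them inside the target namespace via ip netns exec, which is dropped here for brevity.
# ADQ device setup as traced above (values from this log; run as root).
IF=cvl_0_0
ethtool --offload "$IF" hw-tc-offload on
ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 = 2 default queues, TC1 = 2 queues reserved for NVMe/TCP.
tc qdisc add dev "$IF" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev "$IF" ingress
# Steer traffic for the NVMe/TCP listener into TC1, offloaded to the NIC (skip_sw).
tc filter add dev "$IF" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1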
00:22:30.191 [2024-07-15 20:58:34.002978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.191 [2024-07-15 20:58:34.003096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.191 [2024-07-15 20:58:34.003254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.191 [2024-07-15 20:58:34.003255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.762 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:30.762 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:30.762 20:58:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:30.762 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:30.762 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.024 [2024-07-15 20:58:34.801464] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.024 Malloc1 00:22:31.024 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.025 20:58:34 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:31.025 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.025 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.025 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.025 20:58:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:31.025 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.025 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.025 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.025 20:58:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.025 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.025 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.025 [2024-07-15 20:58:34.860772] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.025 20:58:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.025 20:58:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1649047 00:22:31.025 20:58:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:31.025 20:58:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:31.025 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.570 20:58:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:33.570 20:58:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.570 20:58:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:33.570 20:58:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.570 20:58:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:33.570 "tick_rate": 2400000000, 00:22:33.570 "poll_groups": [ 00:22:33.570 { 00:22:33.570 "name": "nvmf_tgt_poll_group_000", 00:22:33.570 "admin_qpairs": 1, 00:22:33.570 "io_qpairs": 3, 00:22:33.570 "current_admin_qpairs": 1, 00:22:33.570 "current_io_qpairs": 3, 00:22:33.570 "pending_bdev_io": 0, 00:22:33.570 "completed_nvme_io": 32229, 00:22:33.570 "transports": [ 00:22:33.570 { 00:22:33.570 "trtype": "TCP" 00:22:33.570 } 00:22:33.570 ] 00:22:33.570 }, 00:22:33.570 { 00:22:33.570 "name": "nvmf_tgt_poll_group_001", 00:22:33.570 "admin_qpairs": 0, 00:22:33.570 "io_qpairs": 1, 00:22:33.570 "current_admin_qpairs": 0, 00:22:33.570 "current_io_qpairs": 1, 00:22:33.570 "pending_bdev_io": 0, 00:22:33.570 "completed_nvme_io": 34968, 00:22:33.570 "transports": [ 00:22:33.570 { 00:22:33.570 "trtype": "TCP" 00:22:33.570 } 00:22:33.570 ] 00:22:33.570 }, 00:22:33.570 { 00:22:33.570 "name": "nvmf_tgt_poll_group_002", 00:22:33.570 "admin_qpairs": 0, 00:22:33.570 "io_qpairs": 0, 00:22:33.570 "current_admin_qpairs": 0, 00:22:33.570 "current_io_qpairs": 0, 00:22:33.570 "pending_bdev_io": 0, 00:22:33.570 "completed_nvme_io": 0, 
00:22:33.570 "transports": [ 00:22:33.570 { 00:22:33.570 "trtype": "TCP" 00:22:33.570 } 00:22:33.570 ] 00:22:33.570 }, 00:22:33.570 { 00:22:33.570 "name": "nvmf_tgt_poll_group_003", 00:22:33.570 "admin_qpairs": 0, 00:22:33.570 "io_qpairs": 0, 00:22:33.570 "current_admin_qpairs": 0, 00:22:33.570 "current_io_qpairs": 0, 00:22:33.570 "pending_bdev_io": 0, 00:22:33.570 "completed_nvme_io": 0, 00:22:33.570 "transports": [ 00:22:33.570 { 00:22:33.570 "trtype": "TCP" 00:22:33.570 } 00:22:33.570 ] 00:22:33.570 } 00:22:33.570 ] 00:22:33.570 }' 00:22:33.570 20:58:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:33.570 20:58:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:33.570 20:58:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:33.570 20:58:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:33.570 20:58:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1649047 00:22:41.763 Initializing NVMe Controllers 00:22:41.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:41.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:41.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:41.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:41.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:41.763 Initialization complete. Launching workers. 00:22:41.763 ======================================================== 00:22:41.763 Latency(us) 00:22:41.763 Device Information : IOPS MiB/s Average min max 00:22:41.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6707.50 26.20 9543.33 1482.30 59475.22 00:22:41.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 17263.59 67.44 3707.19 1200.72 44635.07 00:22:41.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6272.20 24.50 10204.02 1503.67 57594.57 00:22:41.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8754.90 34.20 7325.58 1106.74 53694.59 00:22:41.764 ======================================================== 00:22:41.764 Total : 38998.18 152.34 6568.19 1106.74 59475.22 00:22:41.764 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.764 rmmod nvme_tcp 00:22:41.764 rmmod nvme_fabrics 00:22:41.764 rmmod nvme_keyring 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1648859 ']' 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1648859 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1648859 ']' 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1648859 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1648859 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1648859' 00:22:41.764 killing process with pid 1648859 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1648859 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1648859 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.764 20:58:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.679 20:58:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:43.679 20:58:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:43.679 00:22:43.679 real 0m51.968s 00:22:43.679 user 2m44.400s 00:22:43.679 sys 0m12.997s 00:22:43.679 20:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:43.679 20:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:43.679 ************************************ 00:22:43.679 END TEST nvmf_perf_adq 00:22:43.679 ************************************ 00:22:43.679 20:58:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:43.679 20:58:47 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:43.679 20:58:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:43.679 20:58:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.679 20:58:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:43.679 ************************************ 00:22:43.679 START TEST nvmf_shutdown 00:22:43.679 ************************************ 00:22:43.679 20:58:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:43.679 * Looking for test storage... 
00:22:43.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:43.679 20:58:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.679 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:43.679 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.679 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.679 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.679 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.679 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.679 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.679 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.679 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.679 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.679 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.679 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.680 20:58:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:43.941 ************************************ 00:22:43.942 START TEST nvmf_shutdown_tc1 00:22:43.942 ************************************ 00:22:43.942 20:58:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:22:43.942 20:58:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:43.942 20:58:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:43.942 20:58:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:43.942 20:58:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.942 20:58:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:43.942 20:58:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:43.942 20:58:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:43.942 20:58:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.942 20:58:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.942 20:58:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.942 20:58:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:43.942 20:58:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:43.942 20:58:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:43.942 20:58:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:50.538 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:50.538 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.538 20:58:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:50.538 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:50.538 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.538 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:50.821 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.821 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.821 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.821 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:50.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:22:50.821 00:22:50.821 --- 10.0.0.2 ping statistics --- 00:22:50.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.821 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:22:50.821 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.370 ms 00:22:50.821 00:22:50.821 --- 10.0.0.1 ping statistics --- 00:22:50.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.821 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:22:50.821 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.821 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:50.821 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:50.821 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.821 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:50.821 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:50.821 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.822 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:50.822 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:50.822 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:50.822 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:50.822 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:50.822 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.822 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1655235 00:22:50.822 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1655235 00:22:50.822 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:50.822 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1655235 ']' 00:22:50.822 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.822 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.822 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.822 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.822 20:58:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.822 [2024-07-15 20:58:54.682245] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:22:50.822 [2024-07-15 20:58:54.682309] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.083 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.083 [2024-07-15 20:58:54.770755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.083 [2024-07-15 20:58:54.863096] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.083 [2024-07-15 20:58:54.863162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.083 [2024-07-15 20:58:54.863171] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.083 [2024-07-15 20:58:54.863178] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.083 [2024-07-15 20:58:54.863184] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.083 [2024-07-15 20:58:54.863375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.083 [2024-07-15 20:58:54.863550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.083 [2024-07-15 20:58:54.863716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.083 [2024-07-15 20:58:54.863716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.654 [2024-07-15 20:58:55.520704] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:51.654 20:58:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.654 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.914 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.914 Malloc1 00:22:51.914 [2024-07-15 20:58:55.624110] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.914 Malloc2 00:22:51.914 Malloc3 00:22:51.914 Malloc4 00:22:51.914 Malloc5 00:22:51.914 Malloc6 00:22:52.174 Malloc7 00:22:52.174 Malloc8 00:22:52.174 Malloc9 00:22:52.174 Malloc10 00:22:52.174 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.174 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:52.174 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:52.174 20:58:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:52.174 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1655545 00:22:52.174 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1655545 
/var/tmp/bdevperf.sock 00:22:52.174 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1655545 ']' 00:22:52.174 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.174 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.174 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.175 { 00:22:52.175 "params": { 00:22:52.175 "name": "Nvme$subsystem", 00:22:52.175 "trtype": "$TEST_TRANSPORT", 00:22:52.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.175 "adrfam": "ipv4", 00:22:52.175 "trsvcid": "$NVMF_PORT", 00:22:52.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.175 "hdgst": ${hdgst:-false}, 00:22:52.175 "ddgst": ${ddgst:-false} 00:22:52.175 }, 00:22:52.175 "method": "bdev_nvme_attach_controller" 00:22:52.175 } 00:22:52.175 EOF 00:22:52.175 )") 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.175 { 00:22:52.175 "params": { 00:22:52.175 "name": "Nvme$subsystem", 00:22:52.175 "trtype": "$TEST_TRANSPORT", 00:22:52.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.175 "adrfam": "ipv4", 00:22:52.175 "trsvcid": "$NVMF_PORT", 00:22:52.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.175 "hdgst": ${hdgst:-false}, 00:22:52.175 "ddgst": ${ddgst:-false} 00:22:52.175 }, 00:22:52.175 "method": "bdev_nvme_attach_controller" 00:22:52.175 } 00:22:52.175 EOF 00:22:52.175 )") 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.175 { 00:22:52.175 "params": { 00:22:52.175 
"name": "Nvme$subsystem", 00:22:52.175 "trtype": "$TEST_TRANSPORT", 00:22:52.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.175 "adrfam": "ipv4", 00:22:52.175 "trsvcid": "$NVMF_PORT", 00:22:52.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.175 "hdgst": ${hdgst:-false}, 00:22:52.175 "ddgst": ${ddgst:-false} 00:22:52.175 }, 00:22:52.175 "method": "bdev_nvme_attach_controller" 00:22:52.175 } 00:22:52.175 EOF 00:22:52.175 )") 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.175 { 00:22:52.175 "params": { 00:22:52.175 "name": "Nvme$subsystem", 00:22:52.175 "trtype": "$TEST_TRANSPORT", 00:22:52.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.175 "adrfam": "ipv4", 00:22:52.175 "trsvcid": "$NVMF_PORT", 00:22:52.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.175 "hdgst": ${hdgst:-false}, 00:22:52.175 "ddgst": ${ddgst:-false} 00:22:52.175 }, 00:22:52.175 "method": "bdev_nvme_attach_controller" 00:22:52.175 } 00:22:52.175 EOF 00:22:52.175 )") 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.175 { 00:22:52.175 "params": { 00:22:52.175 "name": "Nvme$subsystem", 00:22:52.175 "trtype": "$TEST_TRANSPORT", 00:22:52.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.175 "adrfam": "ipv4", 00:22:52.175 "trsvcid": "$NVMF_PORT", 00:22:52.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.175 "hdgst": ${hdgst:-false}, 00:22:52.175 "ddgst": ${ddgst:-false} 00:22:52.175 }, 00:22:52.175 "method": "bdev_nvme_attach_controller" 00:22:52.175 } 00:22:52.175 EOF 00:22:52.175 )") 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.175 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.175 { 00:22:52.175 "params": { 00:22:52.175 "name": "Nvme$subsystem", 00:22:52.175 "trtype": "$TEST_TRANSPORT", 00:22:52.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.175 "adrfam": "ipv4", 00:22:52.175 "trsvcid": "$NVMF_PORT", 00:22:52.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.175 "hdgst": ${hdgst:-false}, 00:22:52.175 "ddgst": ${ddgst:-false} 00:22:52.175 }, 00:22:52.175 "method": "bdev_nvme_attach_controller" 00:22:52.175 } 00:22:52.175 EOF 00:22:52.175 )") 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.435 [2024-07-15 20:58:56.070300] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:22:52.435 [2024-07-15 20:58:56.070354] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.435 { 00:22:52.435 "params": { 00:22:52.435 "name": "Nvme$subsystem", 00:22:52.435 "trtype": "$TEST_TRANSPORT", 00:22:52.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.435 "adrfam": "ipv4", 00:22:52.435 "trsvcid": "$NVMF_PORT", 00:22:52.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.435 "hdgst": ${hdgst:-false}, 00:22:52.435 "ddgst": ${ddgst:-false} 00:22:52.435 }, 00:22:52.435 "method": "bdev_nvme_attach_controller" 00:22:52.435 } 00:22:52.435 EOF 00:22:52.435 )") 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.435 { 00:22:52.435 "params": { 00:22:52.435 "name": "Nvme$subsystem", 00:22:52.435 "trtype": "$TEST_TRANSPORT", 00:22:52.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.435 "adrfam": "ipv4", 00:22:52.435 "trsvcid": "$NVMF_PORT", 00:22:52.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.435 "hdgst": ${hdgst:-false}, 00:22:52.435 "ddgst": ${ddgst:-false} 00:22:52.435 }, 00:22:52.435 "method": "bdev_nvme_attach_controller" 00:22:52.435 } 00:22:52.435 EOF 00:22:52.435 )") 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.435 { 00:22:52.435 "params": { 00:22:52.435 "name": "Nvme$subsystem", 00:22:52.435 "trtype": "$TEST_TRANSPORT", 00:22:52.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.435 "adrfam": "ipv4", 00:22:52.435 "trsvcid": "$NVMF_PORT", 00:22:52.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.435 "hdgst": ${hdgst:-false}, 00:22:52.435 "ddgst": ${ddgst:-false} 00:22:52.435 }, 00:22:52.435 "method": "bdev_nvme_attach_controller" 00:22:52.435 } 00:22:52.435 EOF 00:22:52.435 )") 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.435 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.435 { 00:22:52.435 "params": { 00:22:52.435 "name": "Nvme$subsystem", 00:22:52.435 "trtype": "$TEST_TRANSPORT", 00:22:52.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.435 "adrfam": "ipv4", 00:22:52.435 "trsvcid": "$NVMF_PORT", 00:22:52.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.435 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:52.435 "hdgst": ${hdgst:-false}, 00:22:52.435 "ddgst": ${ddgst:-false} 00:22:52.435 }, 00:22:52.435 "method": "bdev_nvme_attach_controller" 00:22:52.435 } 00:22:52.435 EOF 00:22:52.435 )") 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:52.435 20:58:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:52.435 "params": { 00:22:52.435 "name": "Nvme1", 00:22:52.435 "trtype": "tcp", 00:22:52.435 "traddr": "10.0.0.2", 00:22:52.435 "adrfam": "ipv4", 00:22:52.435 "trsvcid": "4420", 00:22:52.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.435 "hdgst": false, 00:22:52.435 "ddgst": false 00:22:52.435 }, 00:22:52.435 "method": "bdev_nvme_attach_controller" 00:22:52.435 },{ 00:22:52.435 "params": { 00:22:52.435 "name": "Nvme2", 00:22:52.435 "trtype": "tcp", 00:22:52.435 "traddr": "10.0.0.2", 00:22:52.435 "adrfam": "ipv4", 00:22:52.435 "trsvcid": "4420", 00:22:52.435 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:52.435 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:52.435 "hdgst": false, 00:22:52.435 "ddgst": false 00:22:52.435 }, 00:22:52.435 "method": "bdev_nvme_attach_controller" 00:22:52.435 },{ 00:22:52.435 "params": { 00:22:52.435 "name": "Nvme3", 00:22:52.435 "trtype": "tcp", 00:22:52.435 "traddr": "10.0.0.2", 00:22:52.435 "adrfam": "ipv4", 00:22:52.435 "trsvcid": "4420", 00:22:52.435 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:52.435 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:52.435 "hdgst": false, 00:22:52.435 "ddgst": false 00:22:52.435 }, 00:22:52.435 "method": "bdev_nvme_attach_controller" 00:22:52.435 },{ 00:22:52.435 "params": { 00:22:52.435 "name": "Nvme4", 00:22:52.435 "trtype": "tcp", 00:22:52.435 "traddr": "10.0.0.2", 00:22:52.435 "adrfam": "ipv4", 00:22:52.435 "trsvcid": "4420", 00:22:52.435 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:52.435 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:52.435 "hdgst": false, 00:22:52.435 "ddgst": false 00:22:52.435 }, 00:22:52.435 "method": "bdev_nvme_attach_controller" 00:22:52.435 },{ 00:22:52.435 "params": { 00:22:52.435 "name": "Nvme5", 00:22:52.435 "trtype": "tcp", 00:22:52.435 "traddr": "10.0.0.2", 00:22:52.435 "adrfam": "ipv4", 00:22:52.435 "trsvcid": "4420", 00:22:52.435 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:52.435 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:52.435 "hdgst": false, 00:22:52.435 "ddgst": false 00:22:52.435 }, 00:22:52.435 "method": "bdev_nvme_attach_controller" 00:22:52.435 },{ 00:22:52.435 "params": { 00:22:52.435 "name": "Nvme6", 00:22:52.435 "trtype": "tcp", 00:22:52.435 "traddr": "10.0.0.2", 00:22:52.435 "adrfam": "ipv4", 00:22:52.435 "trsvcid": "4420", 00:22:52.435 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:52.435 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:52.435 "hdgst": false, 00:22:52.435 "ddgst": false 00:22:52.435 }, 00:22:52.435 "method": "bdev_nvme_attach_controller" 00:22:52.435 },{ 00:22:52.435 "params": { 00:22:52.435 "name": "Nvme7", 00:22:52.435 "trtype": "tcp", 00:22:52.435 "traddr": "10.0.0.2", 00:22:52.435 "adrfam": "ipv4", 00:22:52.435 "trsvcid": "4420", 00:22:52.435 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:52.435 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:52.435 "hdgst": false, 00:22:52.435 
"ddgst": false 00:22:52.435 }, 00:22:52.435 "method": "bdev_nvme_attach_controller" 00:22:52.435 },{ 00:22:52.435 "params": { 00:22:52.435 "name": "Nvme8", 00:22:52.435 "trtype": "tcp", 00:22:52.435 "traddr": "10.0.0.2", 00:22:52.435 "adrfam": "ipv4", 00:22:52.435 "trsvcid": "4420", 00:22:52.435 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:52.435 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:52.435 "hdgst": false, 00:22:52.435 "ddgst": false 00:22:52.435 }, 00:22:52.435 "method": "bdev_nvme_attach_controller" 00:22:52.435 },{ 00:22:52.435 "params": { 00:22:52.435 "name": "Nvme9", 00:22:52.435 "trtype": "tcp", 00:22:52.435 "traddr": "10.0.0.2", 00:22:52.435 "adrfam": "ipv4", 00:22:52.435 "trsvcid": "4420", 00:22:52.435 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:52.435 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:52.435 "hdgst": false, 00:22:52.435 "ddgst": false 00:22:52.435 }, 00:22:52.435 "method": "bdev_nvme_attach_controller" 00:22:52.435 },{ 00:22:52.435 "params": { 00:22:52.435 "name": "Nvme10", 00:22:52.435 "trtype": "tcp", 00:22:52.435 "traddr": "10.0.0.2", 00:22:52.435 "adrfam": "ipv4", 00:22:52.435 "trsvcid": "4420", 00:22:52.435 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:52.435 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:52.435 "hdgst": false, 00:22:52.435 "ddgst": false 00:22:52.435 }, 00:22:52.435 "method": "bdev_nvme_attach_controller" 00:22:52.435 }' 00:22:52.435 [2024-07-15 20:58:56.130391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.435 [2024-07-15 20:58:56.195156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.818 20:58:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.818 20:58:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:53.818 20:58:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:53.818 20:58:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.818 20:58:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:53.818 20:58:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.818 20:58:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1655545 00:22:53.818 20:58:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:53.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1655545 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:53.818 20:58:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:54.758 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1655235 00:22:54.758 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:54.758 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:54.758 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:54.758 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local 
subsystem config 00:22:54.758 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.758 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.758 { 00:22:54.759 "params": { 00:22:54.759 "name": "Nvme$subsystem", 00:22:54.759 "trtype": "$TEST_TRANSPORT", 00:22:54.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.759 "adrfam": "ipv4", 00:22:54.759 "trsvcid": "$NVMF_PORT", 00:22:54.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.759 "hdgst": ${hdgst:-false}, 00:22:54.759 "ddgst": ${ddgst:-false} 00:22:54.759 }, 00:22:54.759 "method": "bdev_nvme_attach_controller" 00:22:54.759 } 00:22:54.759 EOF 00:22:54.759 )") 00:22:54.759 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.759 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.759 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.759 { 00:22:54.759 "params": { 00:22:54.759 "name": "Nvme$subsystem", 00:22:54.759 "trtype": "$TEST_TRANSPORT", 00:22:54.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.759 "adrfam": "ipv4", 00:22:54.759 "trsvcid": "$NVMF_PORT", 00:22:54.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.759 "hdgst": ${hdgst:-false}, 00:22:54.759 "ddgst": ${ddgst:-false} 00:22:54.759 }, 00:22:54.759 "method": "bdev_nvme_attach_controller" 00:22:54.759 } 00:22:54.759 EOF 00:22:54.759 )") 00:22:54.759 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.759 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.759 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.759 { 00:22:54.759 "params": { 00:22:54.759 "name": "Nvme$subsystem", 00:22:54.759 "trtype": "$TEST_TRANSPORT", 00:22:54.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.759 "adrfam": "ipv4", 00:22:54.759 "trsvcid": "$NVMF_PORT", 00:22:54.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.759 "hdgst": ${hdgst:-false}, 00:22:54.759 "ddgst": ${ddgst:-false} 00:22:54.759 }, 00:22:54.759 "method": "bdev_nvme_attach_controller" 00:22:54.759 } 00:22:54.759 EOF 00:22:54.759 )") 00:22:54.759 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.759 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.759 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.759 { 00:22:54.759 "params": { 00:22:54.759 "name": "Nvme$subsystem", 00:22:54.759 "trtype": "$TEST_TRANSPORT", 00:22:54.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.759 "adrfam": "ipv4", 00:22:54.759 "trsvcid": "$NVMF_PORT", 00:22:54.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.759 "hdgst": ${hdgst:-false}, 00:22:54.759 "ddgst": ${ddgst:-false} 00:22:54.759 }, 00:22:54.759 "method": "bdev_nvme_attach_controller" 00:22:54.759 } 00:22:54.759 EOF 00:22:54.759 )") 00:22:54.759 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.759 
20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.759 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.759 { 00:22:54.759 "params": { 00:22:54.759 "name": "Nvme$subsystem", 00:22:54.759 "trtype": "$TEST_TRANSPORT", 00:22:54.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.759 "adrfam": "ipv4", 00:22:54.759 "trsvcid": "$NVMF_PORT", 00:22:54.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.759 "hdgst": ${hdgst:-false}, 00:22:54.759 "ddgst": ${ddgst:-false} 00:22:54.759 }, 00:22:54.759 "method": "bdev_nvme_attach_controller" 00:22:54.759 } 00:22:54.759 EOF 00:22:54.759 )") 00:22:54.759 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:55.019 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:55.019 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:55.019 { 00:22:55.019 "params": { 00:22:55.019 "name": "Nvme$subsystem", 00:22:55.019 "trtype": "$TEST_TRANSPORT", 00:22:55.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.019 "adrfam": "ipv4", 00:22:55.019 "trsvcid": "$NVMF_PORT", 00:22:55.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.019 "hdgst": ${hdgst:-false}, 00:22:55.019 "ddgst": ${ddgst:-false} 00:22:55.019 }, 00:22:55.019 "method": "bdev_nvme_attach_controller" 00:22:55.019 } 00:22:55.019 EOF 00:22:55.019 )") 00:22:55.019 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:55.019 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:55.019 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:55.019 { 00:22:55.019 "params": { 00:22:55.019 "name": "Nvme$subsystem", 00:22:55.019 "trtype": "$TEST_TRANSPORT", 00:22:55.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.019 "adrfam": "ipv4", 00:22:55.019 "trsvcid": "$NVMF_PORT", 00:22:55.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.019 "hdgst": ${hdgst:-false}, 00:22:55.019 "ddgst": ${ddgst:-false} 00:22:55.019 }, 00:22:55.019 "method": "bdev_nvme_attach_controller" 00:22:55.019 } 00:22:55.019 EOF 00:22:55.019 )") 00:22:55.019 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:55.019 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:55.019 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:55.019 { 00:22:55.019 "params": { 00:22:55.019 "name": "Nvme$subsystem", 00:22:55.019 "trtype": "$TEST_TRANSPORT", 00:22:55.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.019 "adrfam": "ipv4", 00:22:55.019 "trsvcid": "$NVMF_PORT", 00:22:55.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.019 "hdgst": ${hdgst:-false}, 00:22:55.019 "ddgst": ${ddgst:-false} 00:22:55.019 }, 00:22:55.019 "method": "bdev_nvme_attach_controller" 00:22:55.019 } 00:22:55.019 EOF 00:22:55.019 )") 00:22:55.019 [2024-07-15 20:58:58.668728] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:22:55.019 [2024-07-15 20:58:58.668784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656231 ] 00:22:55.019 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:55.019 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:55.019 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:55.019 { 00:22:55.019 "params": { 00:22:55.019 "name": "Nvme$subsystem", 00:22:55.019 "trtype": "$TEST_TRANSPORT", 00:22:55.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.019 "adrfam": "ipv4", 00:22:55.019 "trsvcid": "$NVMF_PORT", 00:22:55.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.019 "hdgst": ${hdgst:-false}, 00:22:55.019 "ddgst": ${ddgst:-false} 00:22:55.019 }, 00:22:55.019 "method": "bdev_nvme_attach_controller" 00:22:55.019 } 00:22:55.019 EOF 00:22:55.019 )") 00:22:55.019 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:55.019 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:55.020 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:55.020 { 00:22:55.020 "params": { 00:22:55.020 "name": "Nvme$subsystem", 00:22:55.020 "trtype": "$TEST_TRANSPORT", 00:22:55.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.020 "adrfam": "ipv4", 00:22:55.020 "trsvcid": "$NVMF_PORT", 00:22:55.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.020 "hdgst": ${hdgst:-false}, 00:22:55.020 "ddgst": ${ddgst:-false} 00:22:55.020 }, 00:22:55.020 "method": "bdev_nvme_attach_controller" 00:22:55.020 } 00:22:55.020 EOF 00:22:55.020 )") 00:22:55.020 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:55.020 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
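The repeating for-subsystem/heredoc entries above are nvmf/common.sh building one bdev_nvme_attach_controller fragment per requested subsystem and then joining them for jq. A minimal sketch of that pattern follows; gen_attach_config is a hypothetical stand-in for the traced helper, and the bdev-subsystem wrapper around the joined fragments reflects the layout bdevperf expects for --json rather than anything printed in this trace.

gen_attach_config() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One JSON fragment per subsystem; hdgst/ddgst fall back to false unless the caller sets them.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with commas and let jq validate/pretty-print the assembled config.
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ $(IFS=,; printf '%s\n' "${config[*]}") ]
    }
  ]
}
JSON
}

Called as gen_attach_config 1 2 3 4 5 6 7 8 9 10, this yields the ten Nvme1..Nvme10 attach calls that appear fully resolved a few lines below.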
00:22:55.020 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:55.020 20:58:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:55.020 "params": { 00:22:55.020 "name": "Nvme1", 00:22:55.020 "trtype": "tcp", 00:22:55.020 "traddr": "10.0.0.2", 00:22:55.020 "adrfam": "ipv4", 00:22:55.020 "trsvcid": "4420", 00:22:55.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.020 "hdgst": false, 00:22:55.020 "ddgst": false 00:22:55.020 }, 00:22:55.020 "method": "bdev_nvme_attach_controller" 00:22:55.020 },{ 00:22:55.020 "params": { 00:22:55.020 "name": "Nvme2", 00:22:55.020 "trtype": "tcp", 00:22:55.020 "traddr": "10.0.0.2", 00:22:55.020 "adrfam": "ipv4", 00:22:55.020 "trsvcid": "4420", 00:22:55.020 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:55.020 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:55.020 "hdgst": false, 00:22:55.020 "ddgst": false 00:22:55.020 }, 00:22:55.020 "method": "bdev_nvme_attach_controller" 00:22:55.020 },{ 00:22:55.020 "params": { 00:22:55.020 "name": "Nvme3", 00:22:55.020 "trtype": "tcp", 00:22:55.020 "traddr": "10.0.0.2", 00:22:55.020 "adrfam": "ipv4", 00:22:55.020 "trsvcid": "4420", 00:22:55.020 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:55.020 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:55.020 "hdgst": false, 00:22:55.020 "ddgst": false 00:22:55.020 }, 00:22:55.020 "method": "bdev_nvme_attach_controller" 00:22:55.020 },{ 00:22:55.020 "params": { 00:22:55.020 "name": "Nvme4", 00:22:55.020 "trtype": "tcp", 00:22:55.020 "traddr": "10.0.0.2", 00:22:55.020 "adrfam": "ipv4", 00:22:55.020 "trsvcid": "4420", 00:22:55.020 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:55.020 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:55.020 "hdgst": false, 00:22:55.020 "ddgst": false 00:22:55.020 }, 00:22:55.020 "method": "bdev_nvme_attach_controller" 00:22:55.020 },{ 00:22:55.020 "params": { 00:22:55.020 "name": "Nvme5", 00:22:55.020 "trtype": "tcp", 00:22:55.020 "traddr": "10.0.0.2", 00:22:55.020 "adrfam": "ipv4", 00:22:55.020 "trsvcid": "4420", 00:22:55.020 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:55.020 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:55.020 "hdgst": false, 00:22:55.020 "ddgst": false 00:22:55.020 }, 00:22:55.020 "method": "bdev_nvme_attach_controller" 00:22:55.020 },{ 00:22:55.020 "params": { 00:22:55.020 "name": "Nvme6", 00:22:55.020 "trtype": "tcp", 00:22:55.020 "traddr": "10.0.0.2", 00:22:55.020 "adrfam": "ipv4", 00:22:55.020 "trsvcid": "4420", 00:22:55.020 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:55.020 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:55.020 "hdgst": false, 00:22:55.020 "ddgst": false 00:22:55.020 }, 00:22:55.020 "method": "bdev_nvme_attach_controller" 00:22:55.020 },{ 00:22:55.020 "params": { 00:22:55.020 "name": "Nvme7", 00:22:55.020 "trtype": "tcp", 00:22:55.020 "traddr": "10.0.0.2", 00:22:55.020 "adrfam": "ipv4", 00:22:55.020 "trsvcid": "4420", 00:22:55.020 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:55.020 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:55.020 "hdgst": false, 00:22:55.020 "ddgst": false 00:22:55.020 }, 00:22:55.020 "method": "bdev_nvme_attach_controller" 00:22:55.020 },{ 00:22:55.020 "params": { 00:22:55.020 "name": "Nvme8", 00:22:55.020 "trtype": "tcp", 00:22:55.020 "traddr": "10.0.0.2", 00:22:55.020 "adrfam": "ipv4", 00:22:55.020 "trsvcid": "4420", 00:22:55.020 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:55.020 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:55.020 "hdgst": false, 
00:22:55.020 "ddgst": false 00:22:55.020 }, 00:22:55.020 "method": "bdev_nvme_attach_controller" 00:22:55.020 },{ 00:22:55.020 "params": { 00:22:55.020 "name": "Nvme9", 00:22:55.020 "trtype": "tcp", 00:22:55.020 "traddr": "10.0.0.2", 00:22:55.020 "adrfam": "ipv4", 00:22:55.020 "trsvcid": "4420", 00:22:55.020 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:55.020 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:55.020 "hdgst": false, 00:22:55.020 "ddgst": false 00:22:55.020 }, 00:22:55.020 "method": "bdev_nvme_attach_controller" 00:22:55.020 },{ 00:22:55.020 "params": { 00:22:55.020 "name": "Nvme10", 00:22:55.020 "trtype": "tcp", 00:22:55.020 "traddr": "10.0.0.2", 00:22:55.020 "adrfam": "ipv4", 00:22:55.020 "trsvcid": "4420", 00:22:55.020 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:55.020 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:55.020 "hdgst": false, 00:22:55.020 "ddgst": false 00:22:55.020 }, 00:22:55.020 "method": "bdev_nvme_attach_controller" 00:22:55.020 }' 00:22:55.020 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.020 [2024-07-15 20:58:58.728784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.020 [2024-07-15 20:58:58.792652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.449 Running I/O for 1 seconds... 00:22:57.827 00:22:57.827 Latency(us) 00:22:57.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.827 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.827 Verification LBA range: start 0x0 length 0x400 00:22:57.827 Nvme1n1 : 1.04 184.60 11.54 0.00 0.00 342905.17 23920.64 272629.76 00:22:57.827 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.827 Verification LBA range: start 0x0 length 0x400 00:22:57.827 Nvme2n1 : 1.03 248.24 15.51 0.00 0.00 250125.23 19223.89 246415.36 00:22:57.827 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.827 Verification LBA range: start 0x0 length 0x400 00:22:57.827 Nvme3n1 : 1.14 224.68 14.04 0.00 0.00 267367.25 39321.60 203598.51 00:22:57.827 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.827 Verification LBA range: start 0x0 length 0x400 00:22:57.827 Nvme4n1 : 1.15 222.49 13.91 0.00 0.00 269455.79 21626.88 251658.24 00:22:57.827 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.827 Verification LBA range: start 0x0 length 0x400 00:22:57.827 Nvme5n1 : 1.20 267.59 16.72 0.00 0.00 221278.55 23156.05 244667.73 00:22:57.827 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.827 Verification LBA range: start 0x0 length 0x400 00:22:57.827 Nvme6n1 : 1.20 266.60 16.66 0.00 0.00 218263.21 22828.37 267386.88 00:22:57.827 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.827 Verification LBA range: start 0x0 length 0x400 00:22:57.827 Nvme7n1 : 1.15 277.47 17.34 0.00 0.00 205016.58 18459.31 244667.73 00:22:57.827 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.827 Verification LBA range: start 0x0 length 0x400 00:22:57.827 Nvme8n1 : 1.19 215.01 13.44 0.00 0.00 260797.44 22937.60 295348.91 00:22:57.827 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.827 Verification LBA range: start 0x0 length 0x400 00:22:57.827 Nvme9n1 : 1.21 263.82 16.49 0.00 0.00 209154.56 22500.69 263891.63 00:22:57.827 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:57.827 Verification LBA range: start 0x0 length 0x400 00:22:57.827 Nvme10n1 : 1.23 260.70 16.29 0.00 0.00 208270.51 15291.73 272629.76 00:22:57.827 =================================================================================================================== 00:22:57.827 Total : 2431.20 151.95 0.00 0.00 239309.44 15291.73 295348.91 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:57.827 rmmod nvme_tcp 00:22:57.827 rmmod nvme_fabrics 00:22:57.827 rmmod nvme_keyring 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1655235 ']' 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1655235 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1655235 ']' 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1655235 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1655235 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1655235' 00:22:57.827 killing process with pid 1655235 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1655235 00:22:57.827 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1655235 
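After bdevperf reports its per-controller results, tc1 tears everything down: stoptarget removes the verify state file and the generated bdevperf.conf/rpcs.txt, and nvmftestfini unloads the NVMe-oF kernel modules and kills the target. A rough sketch of those steps as they appear in the trace, with $rootdir standing in for the workspace's spdk directory and $nvmfpid for the target PID (1655235 in this run):

# stoptarget (target/shutdown.sh@41-43)
rm -f ./local-job0-0-verify.state
rm -rf "$rootdir/test/nvmf/target/bdevperf.conf" "$rootdir/test/nvmf/target/rpcs.txt"

# nvmftestfini (nvmf/common.sh@117-125): flush I/O, then tolerate module-removal failures
sync
set +e
modprobe -v -r nvme-tcp        # the verbose removal is what prints the rmmod lines above
modprobe -v -r nvme-fabrics
set -e

# killprocess (autotest_common.sh@948-972): confirm the PID still exists, then kill and reap it
if kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"
    wait "$nvmfpid"
fi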
00:22:58.086 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:58.086 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:58.086 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:58.086 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:58.086 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:58.086 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.086 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.086 20:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.626 20:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:00.626 00:23:00.626 real 0m16.354s 00:23:00.626 user 0m33.944s 00:23:00.626 sys 0m6.431s 00:23:00.626 20:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:00.626 20:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.626 ************************************ 00:23:00.626 END TEST nvmf_shutdown_tc1 00:23:00.626 ************************************ 00:23:00.626 20:59:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:00.626 20:59:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:00.626 20:59:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:00.626 20:59:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:00.626 20:59:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:00.626 ************************************ 00:23:00.626 START TEST nvmf_shutdown_tc2 00:23:00.626 ************************************ 00:23:00.626 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:00.626 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:00.626 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:00.626 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:00.626 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.626 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:00.627 
20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.627 20:59:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:00.627 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:00.627 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:23:00.627 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:00.627 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.627 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:23:00.627 00:23:00.627 --- 10.0.0.2 ping statistics --- 00:23:00.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.627 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:23:00.628 00:23:00.628 --- 10.0.0.1 ping statistics --- 00:23:00.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.628 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1657346 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1657346 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1657346 ']' 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.628 20:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.628 [2024-07-15 20:59:04.465521] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:23:00.628 [2024-07-15 20:59:04.465584] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.628 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.888 [2024-07-15 20:59:04.550451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.888 [2024-07-15 20:59:04.612264] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.888 [2024-07-15 20:59:04.612297] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.888 [2024-07-15 20:59:04.612302] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.888 [2024-07-15 20:59:04.612307] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.888 [2024-07-15 20:59:04.612315] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
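nvmf_shutdown_tc2 starts from nvmftestinit again: the two e810 ports found above (cvl_0_0 and cvl_0_1) are split across a network namespace so the target listens on 10.0.0.2 inside cvl_0_0_ns_spdk while the initiator uses 10.0.0.1 on the host side, and nvmf_tgt is then launched inside that namespace. A condensed sketch of the bring-up traced above; $rootdir again stands in for the workspace's spdk directory:

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

# nvmf_tcp_init (nvmf/common.sh@229-268)
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
ip addr add 10.0.0.1/24 dev cvl_0_1                                          # initiator side
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                           # host -> namespace
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1                    # namespace -> host

# nvmfappstart: run the target on cores 1-4 (-m 0x1E) inside the namespace, then
# waitforlisten polls /var/tmp/spdk.sock until the RPC server answers.
ip netns exec "$NVMF_TARGET_NAMESPACE" \
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!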
00:23:00.888 [2024-07-15 20:59:04.612422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.888 [2024-07-15 20:59:04.612584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.888 [2024-07-15 20:59:04.612711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.888 [2024-07-15 20:59:04.612713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.460 [2024-07-15 20:59:05.290344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.460 20:59:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.460 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.721 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:01.721 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.721 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.721 Malloc1 00:23:01.721 [2024-07-15 20:59:05.389015] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.721 Malloc2 00:23:01.721 Malloc3 00:23:01.721 Malloc4 00:23:01.721 Malloc5 00:23:01.721 Malloc6 00:23:01.721 Malloc7 00:23:01.983 Malloc8 00:23:01.983 Malloc9 00:23:01.983 Malloc10 00:23:01.983 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.983 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:01.983 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.983 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.983 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1657725 00:23:01.983 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1657725 /var/tmp/bdevperf.sock 00:23:01.983 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1657725 ']' 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
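With the target up, the test creates the TCP transport over RPC (the rpc_cmd nvmf_create_transport -t tcp -o -u 8192 call above) and then writes one block per subsystem into rpcs.txt before replaying the whole file, which is what produces the Malloc1..Malloc10 bdevs and the listener on 10.0.0.2:4420. The trace never shows the file's contents, so the block below is only a plausible example for cnode1; the RPC names are standard SPDK rpc.py commands, and the malloc size and serial values are made up:

# Hypothetical per-subsystem batch appended to rpcs.txt (repeated for i = 1..10)
cat >> "$rootdir/test/nvmf/target/rpcs.txt" <<'EOF'
bdev_malloc_create -b Malloc1 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
EOF

# target/shutdown.sh@35 then issues the batched commands in one rpc_cmd invocation,
# feeding the file to the target's RPC socket.
rpc_cmd < "$rootdir/test/nvmf/target/rpcs.txt"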
00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.984 { 00:23:01.984 "params": { 00:23:01.984 "name": "Nvme$subsystem", 00:23:01.984 "trtype": "$TEST_TRANSPORT", 00:23:01.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.984 "adrfam": "ipv4", 00:23:01.984 "trsvcid": "$NVMF_PORT", 00:23:01.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.984 "hdgst": ${hdgst:-false}, 00:23:01.984 "ddgst": ${ddgst:-false} 00:23:01.984 }, 00:23:01.984 "method": "bdev_nvme_attach_controller" 00:23:01.984 } 00:23:01.984 EOF 00:23:01.984 )") 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.984 { 00:23:01.984 "params": { 00:23:01.984 "name": "Nvme$subsystem", 00:23:01.984 "trtype": "$TEST_TRANSPORT", 00:23:01.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.984 "adrfam": "ipv4", 00:23:01.984 "trsvcid": "$NVMF_PORT", 00:23:01.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.984 "hdgst": ${hdgst:-false}, 00:23:01.984 "ddgst": ${ddgst:-false} 00:23:01.984 }, 00:23:01.984 "method": "bdev_nvme_attach_controller" 00:23:01.984 } 00:23:01.984 EOF 00:23:01.984 )") 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.984 { 00:23:01.984 "params": { 00:23:01.984 "name": "Nvme$subsystem", 00:23:01.984 "trtype": "$TEST_TRANSPORT", 00:23:01.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.984 "adrfam": "ipv4", 00:23:01.984 "trsvcid": "$NVMF_PORT", 00:23:01.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.984 "hdgst": ${hdgst:-false}, 00:23:01.984 "ddgst": ${ddgst:-false} 00:23:01.984 }, 00:23:01.984 "method": "bdev_nvme_attach_controller" 00:23:01.984 } 00:23:01.984 EOF 00:23:01.984 )") 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.984 { 00:23:01.984 "params": { 00:23:01.984 "name": "Nvme$subsystem", 00:23:01.984 "trtype": "$TEST_TRANSPORT", 00:23:01.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.984 "adrfam": "ipv4", 00:23:01.984 "trsvcid": "$NVMF_PORT", 00:23:01.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.984 "hdgst": ${hdgst:-false}, 00:23:01.984 "ddgst": ${ddgst:-false} 00:23:01.984 }, 00:23:01.984 "method": "bdev_nvme_attach_controller" 00:23:01.984 } 00:23:01.984 EOF 00:23:01.984 )") 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.984 { 00:23:01.984 "params": { 00:23:01.984 "name": "Nvme$subsystem", 00:23:01.984 "trtype": "$TEST_TRANSPORT", 00:23:01.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.984 "adrfam": "ipv4", 00:23:01.984 "trsvcid": "$NVMF_PORT", 00:23:01.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.984 "hdgst": ${hdgst:-false}, 00:23:01.984 "ddgst": ${ddgst:-false} 00:23:01.984 }, 00:23:01.984 "method": "bdev_nvme_attach_controller" 00:23:01.984 } 00:23:01.984 EOF 00:23:01.984 )") 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.984 { 00:23:01.984 "params": { 00:23:01.984 "name": "Nvme$subsystem", 00:23:01.984 "trtype": "$TEST_TRANSPORT", 00:23:01.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.984 "adrfam": "ipv4", 00:23:01.984 "trsvcid": "$NVMF_PORT", 00:23:01.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.984 "hdgst": ${hdgst:-false}, 00:23:01.984 "ddgst": ${ddgst:-false} 00:23:01.984 }, 00:23:01.984 "method": "bdev_nvme_attach_controller" 00:23:01.984 } 00:23:01.984 EOF 00:23:01.984 )") 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.984 { 00:23:01.984 "params": { 00:23:01.984 "name": "Nvme$subsystem", 00:23:01.984 "trtype": "$TEST_TRANSPORT", 00:23:01.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.984 "adrfam": "ipv4", 00:23:01.984 "trsvcid": "$NVMF_PORT", 00:23:01.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.984 "hdgst": ${hdgst:-false}, 00:23:01.984 "ddgst": ${ddgst:-false} 00:23:01.984 }, 00:23:01.984 "method": "bdev_nvme_attach_controller" 00:23:01.984 } 00:23:01.984 EOF 00:23:01.984 )") 00:23:01.984 [2024-07-15 20:59:05.830895] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:23:01.984 [2024-07-15 20:59:05.830950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657725 ] 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.984 { 00:23:01.984 "params": { 00:23:01.984 "name": "Nvme$subsystem", 00:23:01.984 "trtype": "$TEST_TRANSPORT", 00:23:01.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.984 "adrfam": "ipv4", 00:23:01.984 "trsvcid": "$NVMF_PORT", 00:23:01.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.984 "hdgst": ${hdgst:-false}, 00:23:01.984 "ddgst": ${ddgst:-false} 00:23:01.984 }, 00:23:01.984 "method": "bdev_nvme_attach_controller" 00:23:01.984 } 00:23:01.984 EOF 00:23:01.984 )") 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.984 { 00:23:01.984 "params": { 00:23:01.984 "name": "Nvme$subsystem", 00:23:01.984 "trtype": "$TEST_TRANSPORT", 00:23:01.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.984 "adrfam": "ipv4", 00:23:01.984 "trsvcid": "$NVMF_PORT", 00:23:01.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.984 "hdgst": ${hdgst:-false}, 00:23:01.984 "ddgst": ${ddgst:-false} 00:23:01.984 }, 00:23:01.984 "method": "bdev_nvme_attach_controller" 00:23:01.984 } 00:23:01.984 EOF 00:23:01.984 )") 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.984 { 00:23:01.984 "params": { 00:23:01.984 "name": "Nvme$subsystem", 00:23:01.984 "trtype": "$TEST_TRANSPORT", 00:23:01.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.984 "adrfam": "ipv4", 00:23:01.984 "trsvcid": "$NVMF_PORT", 00:23:01.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.984 "hdgst": ${hdgst:-false}, 00:23:01.984 "ddgst": ${ddgst:-false} 00:23:01.984 }, 00:23:01.984 "method": "bdev_nvme_attach_controller" 00:23:01.984 } 00:23:01.984 EOF 00:23:01.984 )") 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:01.984 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
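The second config-generation pass above belongs to the bdevperf launch for tc2: the JSON is handed to bdevperf through process substitution, which is why the trace shows --json /dev/fd/63. A sketch of that invocation with the flags taken from the trace; gen_nvmf_target_json is the traced helper and $rootdir again stands in for the workspace's spdk directory:

# target/shutdown.sh@102-104: run bdevperf against the ten attached controllers for 10s
"$rootdir/build/examples/bdevperf" \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

# waitforlisten then blocks until the bdevperf RPC socket accepts connections, e.g.
# waitforlisten "$perfpid" /var/tmp/bdevperf.sock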
00:23:01.984 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:01.985 20:59:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:01.985 "params": { 00:23:01.985 "name": "Nvme1", 00:23:01.985 "trtype": "tcp", 00:23:01.985 "traddr": "10.0.0.2", 00:23:01.985 "adrfam": "ipv4", 00:23:01.985 "trsvcid": "4420", 00:23:01.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.985 "hdgst": false, 00:23:01.985 "ddgst": false 00:23:01.985 }, 00:23:01.985 "method": "bdev_nvme_attach_controller" 00:23:01.985 },{ 00:23:01.985 "params": { 00:23:01.985 "name": "Nvme2", 00:23:01.985 "trtype": "tcp", 00:23:01.985 "traddr": "10.0.0.2", 00:23:01.985 "adrfam": "ipv4", 00:23:01.985 "trsvcid": "4420", 00:23:01.985 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.985 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:01.985 "hdgst": false, 00:23:01.985 "ddgst": false 00:23:01.985 }, 00:23:01.985 "method": "bdev_nvme_attach_controller" 00:23:01.985 },{ 00:23:01.985 "params": { 00:23:01.985 "name": "Nvme3", 00:23:01.985 "trtype": "tcp", 00:23:01.985 "traddr": "10.0.0.2", 00:23:01.985 "adrfam": "ipv4", 00:23:01.985 "trsvcid": "4420", 00:23:01.985 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:01.985 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:01.985 "hdgst": false, 00:23:01.985 "ddgst": false 00:23:01.985 }, 00:23:01.985 "method": "bdev_nvme_attach_controller" 00:23:01.985 },{ 00:23:01.985 "params": { 00:23:01.985 "name": "Nvme4", 00:23:01.985 "trtype": "tcp", 00:23:01.985 "traddr": "10.0.0.2", 00:23:01.985 "adrfam": "ipv4", 00:23:01.985 "trsvcid": "4420", 00:23:01.985 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:01.985 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:01.985 "hdgst": false, 00:23:01.985 "ddgst": false 00:23:01.985 }, 00:23:01.985 "method": "bdev_nvme_attach_controller" 00:23:01.985 },{ 00:23:01.985 "params": { 00:23:01.985 "name": "Nvme5", 00:23:01.985 "trtype": "tcp", 00:23:01.985 "traddr": "10.0.0.2", 00:23:01.985 "adrfam": "ipv4", 00:23:01.985 "trsvcid": "4420", 00:23:01.985 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:01.985 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:01.985 "hdgst": false, 00:23:01.985 "ddgst": false 00:23:01.985 }, 00:23:01.985 "method": "bdev_nvme_attach_controller" 00:23:01.985 },{ 00:23:01.985 "params": { 00:23:01.985 "name": "Nvme6", 00:23:01.985 "trtype": "tcp", 00:23:01.985 "traddr": "10.0.0.2", 00:23:01.985 "adrfam": "ipv4", 00:23:01.985 "trsvcid": "4420", 00:23:01.985 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:01.985 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:01.985 "hdgst": false, 00:23:01.985 "ddgst": false 00:23:01.985 }, 00:23:01.985 "method": "bdev_nvme_attach_controller" 00:23:01.985 },{ 00:23:01.985 "params": { 00:23:01.985 "name": "Nvme7", 00:23:01.985 "trtype": "tcp", 00:23:01.985 "traddr": "10.0.0.2", 00:23:01.985 "adrfam": "ipv4", 00:23:01.985 "trsvcid": "4420", 00:23:01.985 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:01.985 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:01.985 "hdgst": false, 00:23:01.985 "ddgst": false 00:23:01.985 }, 00:23:01.985 "method": "bdev_nvme_attach_controller" 00:23:01.985 },{ 00:23:01.985 "params": { 00:23:01.985 "name": "Nvme8", 00:23:01.985 "trtype": "tcp", 00:23:01.985 "traddr": "10.0.0.2", 00:23:01.985 "adrfam": "ipv4", 00:23:01.985 "trsvcid": "4420", 00:23:01.985 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:01.985 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:01.985 "hdgst": false, 
00:23:01.985 "ddgst": false 00:23:01.985 }, 00:23:01.985 "method": "bdev_nvme_attach_controller" 00:23:01.985 },{ 00:23:01.985 "params": { 00:23:01.985 "name": "Nvme9", 00:23:01.985 "trtype": "tcp", 00:23:01.985 "traddr": "10.0.0.2", 00:23:01.985 "adrfam": "ipv4", 00:23:01.985 "trsvcid": "4420", 00:23:01.985 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:01.985 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:01.985 "hdgst": false, 00:23:01.985 "ddgst": false 00:23:01.985 }, 00:23:01.985 "method": "bdev_nvme_attach_controller" 00:23:01.985 },{ 00:23:01.985 "params": { 00:23:01.985 "name": "Nvme10", 00:23:01.985 "trtype": "tcp", 00:23:01.985 "traddr": "10.0.0.2", 00:23:01.985 "adrfam": "ipv4", 00:23:01.985 "trsvcid": "4420", 00:23:01.985 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:01.985 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:01.985 "hdgst": false, 00:23:01.985 "ddgst": false 00:23:01.985 }, 00:23:01.985 "method": "bdev_nvme_attach_controller" 00:23:01.985 }' 00:23:02.247 [2024-07-15 20:59:05.890853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.247 [2024-07-15 20:59:05.955404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.646 Running I/O for 10 seconds... 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.646 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.907 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.907 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:03.907 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 
3 -ge 100 ']' 00:23:03.907 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:04.168 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:04.168 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:04.168 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:04.168 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:04.168 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.168 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:04.168 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.168 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:04.168 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:04.168 20:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1657725 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1657725 ']' 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1657725 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1657725 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- 
# '[' reactor_0 = sudo ']' 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1657725' 00:23:04.429 killing process with pid 1657725 00:23:04.429 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1657725 00:23:04.430 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1657725 00:23:04.430 Received shutdown signal, test time was about 0.964646 seconds 00:23:04.430 00:23:04.430 Latency(us) 00:23:04.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.430 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.430 Verification LBA range: start 0x0 length 0x400 00:23:04.430 Nvme1n1 : 0.92 209.03 13.06 0.00 0.00 302103.89 21408.43 262144.00 00:23:04.430 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.430 Verification LBA range: start 0x0 length 0x400 00:23:04.430 Nvme2n1 : 0.95 268.49 16.78 0.00 0.00 230535.04 21736.11 221074.77 00:23:04.430 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.430 Verification LBA range: start 0x0 length 0x400 00:23:04.430 Nvme3n1 : 0.95 269.09 16.82 0.00 0.00 224609.07 20643.84 244667.73 00:23:04.430 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.430 Verification LBA range: start 0x0 length 0x400 00:23:04.430 Nvme4n1 : 0.94 204.66 12.79 0.00 0.00 289083.16 21845.33 253405.87 00:23:04.430 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.430 Verification LBA range: start 0x0 length 0x400 00:23:04.430 Nvme5n1 : 0.94 271.67 16.98 0.00 0.00 213186.13 17148.59 242920.11 00:23:04.430 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.430 Verification LBA range: start 0x0 length 0x400 00:23:04.430 Nvme6n1 : 0.94 204.45 12.78 0.00 0.00 276690.77 23374.51 255153.49 00:23:04.430 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.430 Verification LBA range: start 0x0 length 0x400 00:23:04.430 Nvme7n1 : 0.96 266.29 16.64 0.00 0.00 208369.92 23811.41 230686.72 00:23:04.430 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.430 Verification LBA range: start 0x0 length 0x400 00:23:04.430 Nvme8n1 : 0.93 206.60 12.91 0.00 0.00 260660.34 23265.28 253405.87 00:23:04.430 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.430 Verification LBA range: start 0x0 length 0x400 00:23:04.430 Nvme9n1 : 0.96 265.63 16.60 0.00 0.00 199306.45 22719.15 270882.13 00:23:04.430 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.430 Verification LBA range: start 0x0 length 0x400 00:23:04.430 Nvme10n1 : 0.95 202.81 12.68 0.00 0.00 253707.66 21954.56 276125.01 00:23:04.430 =================================================================================================================== 00:23:04.430 Total : 2368.72 148.04 0.00 0.00 241450.40 17148.59 276125.01 00:23:04.691 20:59:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:05.634 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1657346 00:23:05.634 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:05.634 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f 
./local-job0-0-verify.state 00:23:05.634 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:05.634 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:05.634 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:05.634 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:05.634 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:05.634 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:05.634 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:05.634 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:05.634 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:05.634 rmmod nvme_tcp 00:23:05.634 rmmod nvme_fabrics 00:23:05.634 rmmod nvme_keyring 00:23:05.634 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:05.895 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:05.895 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:05.895 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1657346 ']' 00:23:05.895 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1657346 00:23:05.895 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1657346 ']' 00:23:05.895 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1657346 00:23:05.895 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:05.895 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:05.895 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1657346 00:23:05.895 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:05.895 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:05.895 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1657346' 00:23:05.895 killing process with pid 1657346 00:23:05.895 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1657346 00:23:05.895 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1657346 00:23:06.156 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:06.156 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:06.156 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:06.156 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:06.156 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- 
# remove_spdk_ns 00:23:06.156 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.156 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.156 20:59:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.071 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:08.071 00:23:08.071 real 0m7.855s 00:23:08.071 user 0m23.548s 00:23:08.071 sys 0m1.264s 00:23:08.071 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:08.071 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:08.071 ************************************ 00:23:08.071 END TEST nvmf_shutdown_tc2 00:23:08.071 ************************************ 00:23:08.071 20:59:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:08.071 20:59:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:08.071 20:59:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:08.071 20:59:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.071 20:59:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:08.335 ************************************ 00:23:08.335 START TEST nvmf_shutdown_tc3 00:23:08.335 ************************************ 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # 
pci_devs=() 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:08.335 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:08.336 20:59:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:08.336 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:08.336 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:08.336 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:08.336 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:08.336 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:08.337 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.337 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:08.337 20:59:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:08.337 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.337 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.337 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.337 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.337 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:08.337 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:08.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:23:08.598 00:23:08.598 --- 10.0.0.2 ping statistics --- 00:23:08.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.598 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:08.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:08.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:23:08.598 00:23:08.598 --- 10.0.0.1 ping statistics --- 00:23:08.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.598 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1659183 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1659183 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1659183 ']' 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.598 20:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.598 [2024-07-15 20:59:12.427329] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:23:08.598 [2024-07-15 20:59:12.427392] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.598 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.859 [2024-07-15 20:59:12.513050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:08.859 [2024-07-15 20:59:12.574811] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.859 [2024-07-15 20:59:12.574845] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.859 [2024-07-15 20:59:12.574850] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.859 [2024-07-15 20:59:12.574855] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.859 [2024-07-15 20:59:12.574859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.859 [2024-07-15 20:59:12.574971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.859 [2024-07-15 20:59:12.575151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:08.859 [2024-07-15 20:59:12.575245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.859 [2024-07-15 20:59:12.575246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.431 [2024-07-15 20:59:13.249330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.431 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.432 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.692 Malloc1 00:23:09.692 [2024-07-15 20:59:13.348011] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.692 Malloc2 00:23:09.692 Malloc3 00:23:09.692 Malloc4 00:23:09.692 Malloc5 00:23:09.692 Malloc6 00:23:09.692 Malloc7 00:23:09.953 Malloc8 00:23:09.953 Malloc9 00:23:09.953 Malloc10 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # 
timing_exit create_subsystems 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1659419 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1659419 /var/tmp/bdevperf.sock 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1659419 ']' 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.953 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.953 { 00:23:09.953 "params": { 00:23:09.954 "name": "Nvme$subsystem", 00:23:09.954 "trtype": "$TEST_TRANSPORT", 00:23:09.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "$NVMF_PORT", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.954 "hdgst": ${hdgst:-false}, 00:23:09.954 "ddgst": ${ddgst:-false} 00:23:09.954 }, 00:23:09.954 "method": "bdev_nvme_attach_controller" 00:23:09.954 } 00:23:09.954 EOF 00:23:09.954 )") 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.954 { 00:23:09.954 "params": { 00:23:09.954 "name": "Nvme$subsystem", 00:23:09.954 "trtype": "$TEST_TRANSPORT", 00:23:09.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "$NVMF_PORT", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.954 "hdgst": ${hdgst:-false}, 00:23:09.954 "ddgst": ${ddgst:-false} 
00:23:09.954 }, 00:23:09.954 "method": "bdev_nvme_attach_controller" 00:23:09.954 } 00:23:09.954 EOF 00:23:09.954 )") 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.954 { 00:23:09.954 "params": { 00:23:09.954 "name": "Nvme$subsystem", 00:23:09.954 "trtype": "$TEST_TRANSPORT", 00:23:09.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "$NVMF_PORT", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.954 "hdgst": ${hdgst:-false}, 00:23:09.954 "ddgst": ${ddgst:-false} 00:23:09.954 }, 00:23:09.954 "method": "bdev_nvme_attach_controller" 00:23:09.954 } 00:23:09.954 EOF 00:23:09.954 )") 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.954 { 00:23:09.954 "params": { 00:23:09.954 "name": "Nvme$subsystem", 00:23:09.954 "trtype": "$TEST_TRANSPORT", 00:23:09.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "$NVMF_PORT", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.954 "hdgst": ${hdgst:-false}, 00:23:09.954 "ddgst": ${ddgst:-false} 00:23:09.954 }, 00:23:09.954 "method": "bdev_nvme_attach_controller" 00:23:09.954 } 00:23:09.954 EOF 00:23:09.954 )") 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.954 { 00:23:09.954 "params": { 00:23:09.954 "name": "Nvme$subsystem", 00:23:09.954 "trtype": "$TEST_TRANSPORT", 00:23:09.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "$NVMF_PORT", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.954 "hdgst": ${hdgst:-false}, 00:23:09.954 "ddgst": ${ddgst:-false} 00:23:09.954 }, 00:23:09.954 "method": "bdev_nvme_attach_controller" 00:23:09.954 } 00:23:09.954 EOF 00:23:09.954 )") 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.954 { 00:23:09.954 "params": { 00:23:09.954 "name": "Nvme$subsystem", 00:23:09.954 "trtype": "$TEST_TRANSPORT", 00:23:09.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "$NVMF_PORT", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.954 "hdgst": ${hdgst:-false}, 00:23:09.954 "ddgst": ${ddgst:-false} 00:23:09.954 }, 00:23:09.954 
"method": "bdev_nvme_attach_controller" 00:23:09.954 } 00:23:09.954 EOF 00:23:09.954 )") 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.954 [2024-07-15 20:59:13.788928] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:23:09.954 [2024-07-15 20:59:13.788981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659419 ] 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.954 { 00:23:09.954 "params": { 00:23:09.954 "name": "Nvme$subsystem", 00:23:09.954 "trtype": "$TEST_TRANSPORT", 00:23:09.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "$NVMF_PORT", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.954 "hdgst": ${hdgst:-false}, 00:23:09.954 "ddgst": ${ddgst:-false} 00:23:09.954 }, 00:23:09.954 "method": "bdev_nvme_attach_controller" 00:23:09.954 } 00:23:09.954 EOF 00:23:09.954 )") 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.954 { 00:23:09.954 "params": { 00:23:09.954 "name": "Nvme$subsystem", 00:23:09.954 "trtype": "$TEST_TRANSPORT", 00:23:09.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "$NVMF_PORT", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.954 "hdgst": ${hdgst:-false}, 00:23:09.954 "ddgst": ${ddgst:-false} 00:23:09.954 }, 00:23:09.954 "method": "bdev_nvme_attach_controller" 00:23:09.954 } 00:23:09.954 EOF 00:23:09.954 )") 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.954 { 00:23:09.954 "params": { 00:23:09.954 "name": "Nvme$subsystem", 00:23:09.954 "trtype": "$TEST_TRANSPORT", 00:23:09.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "$NVMF_PORT", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.954 "hdgst": ${hdgst:-false}, 00:23:09.954 "ddgst": ${ddgst:-false} 00:23:09.954 }, 00:23:09.954 "method": "bdev_nvme_attach_controller" 00:23:09.954 } 00:23:09.954 EOF 00:23:09.954 )") 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.954 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.954 { 
00:23:09.954 "params": { 00:23:09.954 "name": "Nvme$subsystem", 00:23:09.954 "trtype": "$TEST_TRANSPORT", 00:23:09.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "$NVMF_PORT", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.954 "hdgst": ${hdgst:-false}, 00:23:09.954 "ddgst": ${ddgst:-false} 00:23:09.954 }, 00:23:09.954 "method": "bdev_nvme_attach_controller" 00:23:09.954 } 00:23:09.954 EOF 00:23:09.954 )") 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:09.954 20:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:09.954 "params": { 00:23:09.954 "name": "Nvme1", 00:23:09.954 "trtype": "tcp", 00:23:09.954 "traddr": "10.0.0.2", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "4420", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.954 "hdgst": false, 00:23:09.954 "ddgst": false 00:23:09.954 }, 00:23:09.954 "method": "bdev_nvme_attach_controller" 00:23:09.954 },{ 00:23:09.954 "params": { 00:23:09.954 "name": "Nvme2", 00:23:09.954 "trtype": "tcp", 00:23:09.954 "traddr": "10.0.0.2", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "4420", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:09.954 "hdgst": false, 00:23:09.954 "ddgst": false 00:23:09.954 }, 00:23:09.954 "method": "bdev_nvme_attach_controller" 00:23:09.954 },{ 00:23:09.954 "params": { 00:23:09.954 "name": "Nvme3", 00:23:09.954 "trtype": "tcp", 00:23:09.954 "traddr": "10.0.0.2", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "4420", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:09.954 "hdgst": false, 00:23:09.954 "ddgst": false 00:23:09.954 }, 00:23:09.954 "method": "bdev_nvme_attach_controller" 00:23:09.954 },{ 00:23:09.954 "params": { 00:23:09.954 "name": "Nvme4", 00:23:09.954 "trtype": "tcp", 00:23:09.954 "traddr": "10.0.0.2", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "4420", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:09.954 "hdgst": false, 00:23:09.954 "ddgst": false 00:23:09.954 }, 00:23:09.954 "method": "bdev_nvme_attach_controller" 00:23:09.954 },{ 00:23:09.954 "params": { 00:23:09.954 "name": "Nvme5", 00:23:09.954 "trtype": "tcp", 00:23:09.954 "traddr": "10.0.0.2", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "4420", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:09.954 "hdgst": false, 00:23:09.954 "ddgst": false 00:23:09.954 }, 00:23:09.954 "method": "bdev_nvme_attach_controller" 00:23:09.954 },{ 00:23:09.954 "params": { 00:23:09.954 "name": "Nvme6", 00:23:09.954 "trtype": "tcp", 00:23:09.954 "traddr": "10.0.0.2", 00:23:09.954 "adrfam": "ipv4", 00:23:09.954 "trsvcid": "4420", 00:23:09.954 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:09.954 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:09.954 "hdgst": false, 00:23:09.954 "ddgst": false 00:23:09.954 }, 00:23:09.954 "method": "bdev_nvme_attach_controller" 00:23:09.954 },{ 00:23:09.954 "params": { 
00:23:09.954 "name": "Nvme7", 00:23:09.954 "trtype": "tcp", 00:23:09.955 "traddr": "10.0.0.2", 00:23:09.955 "adrfam": "ipv4", 00:23:09.955 "trsvcid": "4420", 00:23:09.955 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:09.955 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:09.955 "hdgst": false, 00:23:09.955 "ddgst": false 00:23:09.955 }, 00:23:09.955 "method": "bdev_nvme_attach_controller" 00:23:09.955 },{ 00:23:09.955 "params": { 00:23:09.955 "name": "Nvme8", 00:23:09.955 "trtype": "tcp", 00:23:09.955 "traddr": "10.0.0.2", 00:23:09.955 "adrfam": "ipv4", 00:23:09.955 "trsvcid": "4420", 00:23:09.955 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:09.955 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:09.955 "hdgst": false, 00:23:09.955 "ddgst": false 00:23:09.955 }, 00:23:09.955 "method": "bdev_nvme_attach_controller" 00:23:09.955 },{ 00:23:09.955 "params": { 00:23:09.955 "name": "Nvme9", 00:23:09.955 "trtype": "tcp", 00:23:09.955 "traddr": "10.0.0.2", 00:23:09.955 "adrfam": "ipv4", 00:23:09.955 "trsvcid": "4420", 00:23:09.955 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:09.955 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:09.955 "hdgst": false, 00:23:09.955 "ddgst": false 00:23:09.955 }, 00:23:09.955 "method": "bdev_nvme_attach_controller" 00:23:09.955 },{ 00:23:09.955 "params": { 00:23:09.955 "name": "Nvme10", 00:23:09.955 "trtype": "tcp", 00:23:09.955 "traddr": "10.0.0.2", 00:23:09.955 "adrfam": "ipv4", 00:23:09.955 "trsvcid": "4420", 00:23:09.955 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:09.955 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:09.955 "hdgst": false, 00:23:09.955 "ddgst": false 00:23:09.955 }, 00:23:09.955 "method": "bdev_nvme_attach_controller" 00:23:09.955 }' 00:23:10.214 [2024-07-15 20:59:13.848271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.214 [2024-07-15 20:59:13.913112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.598 Running I/O for 10 seconds... 
00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:11.598 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:11.859 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:11.859 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:11.859 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:11.859 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:11.859 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.859 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:12.120 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.120 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:23:12.120 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:12.120 20:59:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1659183 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1659183 ']' 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1659183 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1659183 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1659183' 00:23:12.395 killing process with pid 1659183 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1659183 00:23:12.395 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1659183 00:23:12.395 [2024-07-15 20:59:16.150714] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7ae0 is same with the state(5) to be set 00:23:12.395 [2024-07-15 20:59:16.150759] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7ae0 is same with the state(5) to be set 00:23:12.395 [2024-07-15 20:59:16.150765] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7ae0 is same with the state(5) to be set 00:23:12.395 [2024-07-15 20:59:16.150770] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
00:23:12.395 [2024-07-15 20:59:16.150714] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7ae0 is same with the state(5) to be set
[tcp.c:1621:nvmf_tcp_qpair_set_recv_state: the identical *ERROR* message above was repeated through 20:59:16.157598 for tqpair=0xbf7ae0, 0xbf8c50, 0xbf8420, 0xbf88e0, 0xe2efb0, 0xe2f450, 0xe2f8f0 and 0xe30250; duplicate lines omitted]
00:23:12.399 [2024-07-15 20:59:16.164589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164628]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b0ca0 is same with the state(5) to be set 00:23:12.399 [2024-07-15 20:59:16.164728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c6340 is same with the state(5) to be set 00:23:12.399 [2024-07-15 20:59:16.164813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1712a70 is same with the state(5) to be set 00:23:12.399 [2024-07-15 20:59:16.164897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.399 [2024-07-15 20:59:16.164962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.399 [2024-07-15 20:59:16.164969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173f660 is same with the state(5) to be set 00:23:12.399 [2024-07-15 20:59:16.164996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165041] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15745d0 is same with the state(5) to be set 00:23:12.400 [2024-07-15 20:59:16.165077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711e90 is same with the state(5) to be set 00:23:12.400 [2024-07-15 20:59:16.165170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17343a0 is same with the 
state(5) to be set 00:23:12.400 [2024-07-15 20:59:16.165256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b1030 is same with the state(5) to be set 00:23:12.400 [2024-07-15 20:59:16.165340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1740210 is same with the state(5) to be set 00:23:12.400 [2024-07-15 20:59:16.165419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165436] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.400 [2024-07-15 20:59:16.165474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.165481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748990 is same with the state(5) to be set 00:23:12.400 [2024-07-15 20:59:16.168150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.400 [2024-07-15 20:59:16.168174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.168189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.400 [2024-07-15 20:59:16.168197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.168207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.400 [2024-07-15 20:59:16.168215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.168224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.400 [2024-07-15 20:59:16.168231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.168241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.400 [2024-07-15 20:59:16.168248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.168257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.400 [2024-07-15 20:59:16.168264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.168273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.400 [2024-07-15 20:59:16.168280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.168289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.400 [2024-07-15 20:59:16.168296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.168305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.400 [2024-07-15 20:59:16.168312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.168321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.400 [2024-07-15 20:59:16.168333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.168342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.400 [2024-07-15 20:59:16.168349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.168358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.400 [2024-07-15 20:59:16.168365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.168374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.400 [2024-07-15 20:59:16.168381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.168390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.400 [2024-07-15 20:59:16.168397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.400 [2024-07-15 20:59:16.168406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:12.401 [2024-07-15 20:59:16.168778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 
20:59:16.168939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.168987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.168994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.169003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.169011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.169020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.169027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.169036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.169043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.169052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.169059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.169068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.169075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.169084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.401 [2024-07-15 20:59:16.169091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.401 [2024-07-15 20:59:16.169100] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169277] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x156ef80 was disconnected and freed. reset controller. 
00:23:12.402 [2024-07-15 20:59:16.169608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 
[2024-07-15 20:59:16.169790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 
20:59:16.169955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.169988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.169995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.170004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.170011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.170020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.170027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.170036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.170043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.170053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.170060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.170069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.170076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.170085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.170092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.170102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.170109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.170118] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.170131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.170141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.170148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.170156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.170166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.170175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.170182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.170191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.402 [2024-07-15 20:59:16.170198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.402 [2024-07-15 20:59:16.170207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.170214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.170223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.170230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.170239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.170246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.170255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.170262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.170271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.170278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.170287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.170294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.170303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.170310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.170318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.177855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.177922] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x161f1a0 was disconnected and freed. reset controller. 00:23:12.403 [2024-07-15 20:59:16.178013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.178024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.178038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.178046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.178056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.178063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.178071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.178078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.178087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.178094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.178103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.178115] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.178133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.403 [2024-07-15 20:59:16.178140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.403 [2024-07-15 20:59:16.178149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.404 [2024-07-15 20:59:16.178831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.404 [2024-07-15 20:59:16.178840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.178847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.178856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.178863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.178872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.178879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.178888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.178894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.178904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.178910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.178919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.178928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.178937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.178944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.178953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.178960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.178969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.178976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.178985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.178992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179105] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1621b50 was disconnected and freed. reset controller. 
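The sweep of *NOTICE* lines above is the initiator-side SPDK NVMe driver dumping every outstanding READ/WRITE it had to fail when the target deleted the I/O submission queue during the controller reset: each completion is printed with generic status (00/08), i.e. "Command Aborted due to SQ Deletion", after which bdev_nvme reports the disconnected qpair as freed and resets the controller. As a minimal, hedged sketch (not part of this test run), the "(00/08)" pair can be decoded against SPDK's public completion definitions; the helper name, the standalone main(), and the header-only build are illustrative assumptions, while struct spdk_nvme_cpl, SPDK_NVME_SCT_GENERIC and SPDK_NVME_SC_ABORTED_SQ_DELETION are the driver's own definitions.

/* Hypothetical decoder for the "(00/08)" status printed in the notices above;
 * compile with SPDK's include/ directory on the compiler include path. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"      /* struct spdk_nvme_cpl */
#include "spdk/nvme_spec.h" /* SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_ABORTED_SQ_DELETION */

/* True when a completion carries sct/sc = 00/08, i.e. "ABORTED - SQ DELETION". */
static bool cpl_is_aborted_sq_deletion(const struct spdk_nvme_cpl *cpl)
{
        return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
               cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

int main(void)
{
        /* Fabricated completion mirroring one of the log entries (qid:1 cid:0). */
        struct spdk_nvme_cpl cpl = {0};

        cpl.sqid = 1;
        cpl.status.sct = SPDK_NVME_SCT_GENERIC;            /* 0x0 -> the "00" in (00/08) */
        cpl.status.sc = SPDK_NVME_SC_ABORTED_SQ_DELETION;  /* 0x8 -> the "08" in (00/08) */

        printf("aborted by SQ deletion: %s\n",
               cpl_is_aborted_sq_deletion(&cpl) ? "yes" : "no");
        return 0;
}

A bdev_nvme consumer would normally only observe this indirectly through the reset path shown in the notices; decoding the raw status pair is mainly useful when reading a log like the one above.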
00:23:12.405 [2024-07-15 20:59:16.179133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 
20:59:16.179300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 
20:59:16.179463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.405 [2024-07-15 20:59:16.179575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.405 [2024-07-15 20:59:16.179581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 
20:59:16.179624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 
20:59:16.179782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 
20:59:16.179942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.179990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.179998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.180008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.180015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.180023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.180030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.180040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.180047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.180055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.180062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.180071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.180078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.180087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.180094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 
20:59:16.180103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.180110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.180119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.180131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.180141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.180148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.180156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.406 [2024-07-15 20:59:16.180163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.406 [2024-07-15 20:59:16.180211] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x156daf0 was disconnected and freed. reset controller. 00:23:12.406 [2024-07-15 20:59:16.181646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:12.406 [2024-07-15 20:59:16.181679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1711e90 (9): Bad file descriptor 00:23:12.406 [2024-07-15 20:59:16.181715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b0ca0 (9): Bad file descriptor 00:23:12.406 [2024-07-15 20:59:16.181730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c6340 (9): Bad file descriptor 00:23:12.406 [2024-07-15 20:59:16.181753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1712a70 (9): Bad file descriptor 00:23:12.406 [2024-07-15 20:59:16.181770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173f660 (9): Bad file descriptor 00:23:12.406 [2024-07-15 20:59:16.181788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15745d0 (9): Bad file descriptor 00:23:12.406 [2024-07-15 20:59:16.181804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17343a0 (9): Bad file descriptor 00:23:12.406 [2024-07-15 20:59:16.181822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b1030 (9): Bad file descriptor 00:23:12.406 [2024-07-15 20:59:16.181834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1740210 (9): Bad file descriptor 00:23:12.406 [2024-07-15 20:59:16.181846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1748990 (9): Bad file descriptor 00:23:12.406 [2024-07-15 20:59:16.187091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:12.406 [2024-07-15 20:59:16.187121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:12.406 [2024-07-15 20:59:16.187140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:12.407 [2024-07-15 20:59:16.187222] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:12.407 [2024-07-15 20:59:16.187647] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16cc6b0 was disconnected and freed. reset controller. 00:23:12.407 [2024-07-15 20:59:16.187692] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:12.407 [2024-07-15 20:59:16.187738] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:12.407 [2024-07-15 20:59:16.188379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.407 [2024-07-15 20:59:16.188418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1711e90 with addr=10.0.0.2, port=4420 00:23:12.407 [2024-07-15 20:59:16.188429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711e90 is same with the state(5) to be set 00:23:12.407 [2024-07-15 20:59:16.188886] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.407 [2024-07-15 20:59:16.188897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1740210 with addr=10.0.0.2, port=4420 00:23:12.407 [2024-07-15 20:59:16.188904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1740210 is same with the state(5) to be set 00:23:12.407 [2024-07-15 20:59:16.189444] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.407 [2024-07-15 20:59:16.189481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b0ca0 with addr=10.0.0.2, port=4420 00:23:12.407 [2024-07-15 20:59:16.189492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b0ca0 is same with the state(5) to be set 00:23:12.407 [2024-07-15 20:59:16.189613] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.407 [2024-07-15 20:59:16.189624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b1030 with addr=10.0.0.2, port=4420 00:23:12.407 [2024-07-15 20:59:16.189631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b1030 is same with the state(5) to be set 00:23:12.407 [2024-07-15 20:59:16.189952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.407 [2024-07-15 20:59:16.189966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.407 [2024-07-15 20:59:16.189982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.407 [2024-07-15 20:59:16.189995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.407 [2024-07-15 20:59:16.190005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.407 [2024-07-15 20:59:16.190012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 [... 53 READ commands (sqid:1 cid:11-63 nsid:1 lba:25984-32640 len:128) and 7 WRITE commands (sqid:1 cid:0-6 nsid:1 lba:32768-33536 len:128), each reported ABORTED - SQ DELETION (00/08) ...] 00:23:12.408 [2024-07-15 20:59:16.191014] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.408 [2024-07-15 20:59:16.191021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.408 [2024-07-15 20:59:16.191029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1620630 is same with the state(5) to be set 00:23:12.408 [2024-07-15 20:59:16.191075] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1620630 was disconnected and freed. reset controller. 00:23:12.408 [2024-07-15 20:59:16.191631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.408 [2024-07-15 20:59:16.191642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.408 [2024-07-15 20:59:16.191655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.408 [2024-07-15 20:59:16.191662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.408 [2024-07-15 20:59:16.191672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.408 [2024-07-15 20:59:16.191679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.408 [2024-07-15 20:59:16.191689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.408 [2024-07-15 20:59:16.191696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.408 [2024-07-15 20:59:16.191705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.408 [2024-07-15 20:59:16.191712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.408 [2024-07-15 20:59:16.191721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.408 [2024-07-15 20:59:16.191729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.408 [2024-07-15 20:59:16.191738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.408 [2024-07-15 20:59:16.191745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.409 [2024-07-15 20:59:16.191755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.409 [2024-07-15 20:59:16.191762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.409 [2024-07-15 20:59:16.191771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [... 50 READ commands (sqid:1 cid:8-57 nsid:1 lba:25600-31872 len:128), each reported ABORTED - SQ DELETION (00/08) ...] 00:23:12.410 [2024-07-15 20:59:16.192603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.410 [2024-07-15
20:59:16.192610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.410 [2024-07-15 20:59:16.192619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.410 [2024-07-15 20:59:16.192626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.410 [2024-07-15 20:59:16.192635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.410 [2024-07-15 20:59:16.192642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.410 [2024-07-15 20:59:16.192651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.410 [2024-07-15 20:59:16.192658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.410 [2024-07-15 20:59:16.192669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.410 [2024-07-15 20:59:16.192676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.410 [2024-07-15 20:59:16.192685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.410 [2024-07-15 20:59:16.192692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.410 [2024-07-15 20:59:16.192700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15703f0 is same with the state(5) to be set 00:23:12.410 [2024-07-15 20:59:16.192743] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15703f0 was disconnected and freed. reset controller. 00:23:12.410 [2024-07-15 20:59:16.193048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:12.410 [2024-07-15 20:59:16.193084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1711e90 (9): Bad file descriptor 00:23:12.410 [2024-07-15 20:59:16.193096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1740210 (9): Bad file descriptor 00:23:12.410 [2024-07-15 20:59:16.193105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b0ca0 (9): Bad file descriptor 00:23:12.410 [2024-07-15 20:59:16.193114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b1030 (9): Bad file descriptor 00:23:12.410 [2024-07-15 20:59:16.193158] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.410 [2024-07-15 20:59:16.193195] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:12.410 [2024-07-15 20:59:16.195677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:12.410 [2024-07-15 20:59:16.195693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:12.410 [2024-07-15 20:59:16.196132] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.410 [2024-07-15 20:59:16.196146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712a70 with addr=10.0.0.2, port=4420
00:23:12.410 [2024-07-15 20:59:16.196154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1712a70 is same with the state(5) to be set
00:23:12.410 [2024-07-15 20:59:16.196161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:23:12.410 [2024-07-15 20:59:16.196168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:23:12.410 [2024-07-15 20:59:16.196176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:23:12.410 [2024-07-15 20:59:16.196188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:23:12.410 [2024-07-15 20:59:16.196194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:23:12.410 [2024-07-15 20:59:16.196200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:23:12.410 [2024-07-15 20:59:16.196211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:23:12.410 [2024-07-15 20:59:16.196218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:23:12.410 [2024-07-15 20:59:16.196224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:23:12.410 [2024-07-15 20:59:16.196235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:23:12.410 [2024-07-15 20:59:16.196241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:23:12.410 [2024-07-15 20:59:16.196252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:23:12.410 [2024-07-15 20:59:16.196288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.410 [2024-07-15 20:59:16.196297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... 59 further READ commands (sqid:1 cid:1-59 nsid:1 lba:16512-23936 len:128), each reported ABORTED - SQ DELETION (00/08) ...] 00:23:12.412 [2024-07-15 20:59:16.197267] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.197274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.197283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.197290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.197301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.197308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.197317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.197323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.197331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630620 is same with the state(5) to be set 00:23:12.412 [2024-07-15 20:59:16.198701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198798] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.198988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.198995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.199004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.199011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.199020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.199027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.199036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.199043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.199052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.199059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.199068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.199075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.199085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.199093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.199101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.199108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.199117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.199132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.199141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.199149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.199157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.199165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.199173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.412 [2024-07-15 20:59:16.199181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.412 [2024-07-15 20:59:16.199190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:12.413 [2024-07-15 20:59:16.199468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 
20:59:16.199634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.199754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.199762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cda30 is same with the state(5) to be set 00:23:12.413 [2024-07-15 20:59:16.202655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.202686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.202704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.202712] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.202722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.202729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.202738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.202745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.202754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.202761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.413 [2024-07-15 20:59:16.202770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.413 [2024-07-15 20:59:16.202778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.202787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.202794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.202803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.202810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.202819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.202831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.202840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.202847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.202856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.202863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.202872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.202879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.202888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.202895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.202905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.202912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.202921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.202928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.202937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.202944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.202954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.202961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.202970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.202977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.202986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.202993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.414 [2024-07-15 20:59:16.203490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.414 [2024-07-15 20:59:16.203497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.415 [2024-07-15 20:59:16.203514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.415 [2024-07-15 20:59:16.203530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.415 [2024-07-15 20:59:16.203546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.415 [2024-07-15 20:59:16.203562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.415 [2024-07-15 20:59:16.203578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.415 [2024-07-15 20:59:16.203594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.415 [2024-07-15 20:59:16.203610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.415 [2024-07-15 20:59:16.203627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.415 [2024-07-15 20:59:16.203644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.415 [2024-07-15 20:59:16.203662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.415 [2024-07-15 20:59:16.203678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.415 [2024-07-15 20:59:16.203695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.415 [2024-07-15 20:59:16.203711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.415 [2024-07-15 20:59:16.203727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.415 [2024-07-15 20:59:16.203743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.415 [2024-07-15 20:59:16.203752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ceef0 is same with the state(5) to be set 00:23:12.415 [2024-07-15 20:59:16.205268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.415 [2024-07-15 20:59:16.205289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.415 [2024-07-15 20:59:16.205295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.415 [2024-07-15 20:59:16.205302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.415 [2024-07-15 20:59:16.205311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.415 [2024-07-15 20:59:16.205323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:12.415 [2024-07-15 20:59:16.205807] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.415 [2024-07-15 20:59:16.205821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1748990 with addr=10.0.0.2, port=4420 00:23:12.415 [2024-07-15 20:59:16.205829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748990 is same with the state(5) to be set 00:23:12.415 [2024-07-15 20:59:16.206258] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.415 [2024-07-15 20:59:16.206269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10c6340 with addr=10.0.0.2, port=4420 00:23:12.415 [2024-07-15 20:59:16.206276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c6340 is same with the state(5) to be set 00:23:12.415 [2024-07-15 20:59:16.206287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1712a70 (9): Bad file descriptor 00:23:12.415 [2024-07-15 20:59:16.206340] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.415 [2024-07-15 20:59:16.206357] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:12.415 task offset: 24576 on job bdev=Nvme6n1 fails
00:23:12.415 
00:23:12.415 Latency(us)
00:23:12.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:12.415 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.415 Job: Nvme1n1 ended in about 0.97 seconds with error
00:23:12.415 Verification LBA range: start 0x0 length 0x400
00:23:12.415 Nvme1n1 : 0.97 132.40 8.27 66.20 0.00 318878.72 22937.60 295348.91
00:23:12.415 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.415 Job: Nvme2n1 ended in about 0.95 seconds with error
00:23:12.415 Verification LBA range: start 0x0 length 0x400
00:23:12.415 Nvme2n1 : 0.95 208.12 13.01 67.27 0.00 225202.41 6144.00 244667.73
00:23:12.415 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.415 Job: Nvme3n1 ended in about 0.96 seconds with error
00:23:12.415 Verification LBA range: start 0x0 length 0x400
00:23:12.415 Nvme3n1 : 0.96 207.75 12.98 66.48 0.00 221595.57 10103.47 249910.61
00:23:12.415 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.415 Job: Nvme4n1 ended in about 0.95 seconds with error
00:23:12.415 Verification LBA range: start 0x0 length 0x400
00:23:12.415 Nvme4n1 : 0.95 201.56 12.60 67.19 0.00 221295.57 16602.45 232434.35
00:23:12.415 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.415 Job: Nvme5n1 ended in about 0.95 seconds with error
00:23:12.415 Verification LBA range: start 0x0 length 0x400
00:23:12.415 Nvme5n1 : 0.95 201.32 12.58 67.11 0.00 216872.75 16384.00 248162.99
00:23:12.415 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.415 Job: Nvme6n1 ended in about 0.95 seconds with error
00:23:12.415 Verification LBA range: start 0x0 length 0x400
00:23:12.415 Nvme6n1 : 0.95 202.16 12.64 67.39 0.00 211086.93 16056.32 223696.21
00:23:12.415 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.415 Job: Nvme7n1 ended in about 0.96 seconds with error
00:23:12.415 Verification LBA range: start 0x0 length 0x400
00:23:12.415 Nvme7n1 : 0.96 199.19 12.45 66.40 0.00 209891.20 23156.05 262144.00
00:23:12.415 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.415 Verification LBA range: start 0x0 length 0x400
00:23:12.415 Nvme8n1 : 0.96 200.99 12.56 0.00 0.00 270801.07 22609.92 291853.65
00:23:12.415 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.415 Job: Nvme9n1 ended in about 0.97 seconds with error
00:23:12.415 Verification LBA range: start 0x0 length 0x400
00:23:12.415 Nvme9n1 : 0.97 132.06 8.25 66.03 0.00 269102.08 43253.76 227191.47
00:23:12.415 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.415 Job: Nvme10n1 ended in about 0.97 seconds with error
00:23:12.415 Verification LBA range: start 0x0 length 0x400
00:23:12.415 Nvme10n1 : 0.97 131.52 8.22 65.76 0.00 264286.72 23156.05 270882.13
00:23:12.415 ===================================================================================================================
00:23:12.415 Total : 1817.07 113.57 599.82 0.00 238600.05 6144.00 295348.91
00:23:12.415 [2024-07-15 20:59:16.231981] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:12.415 [2024-07-15 20:59:16.232029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:12.415 [2024-07-15 20:59:16.232546] posix.c: 
977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.415 [2024-07-15 20:59:16.232564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15745d0 with addr=10.0.0.2, port=4420 00:23:12.415 [2024-07-15 20:59:16.232574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15745d0 is same with the state(5) to be set 00:23:12.415 [2024-07-15 20:59:16.232865] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.415 [2024-07-15 20:59:16.232875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17343a0 with addr=10.0.0.2, port=4420 00:23:12.415 [2024-07-15 20:59:16.232888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17343a0 is same with the state(5) to be set 00:23:12.415 [2024-07-15 20:59:16.232900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1748990 (9): Bad file descriptor 00:23:12.415 [2024-07-15 20:59:16.232911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c6340 (9): Bad file descriptor 00:23:12.415 [2024-07-15 20:59:16.232920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:12.415 [2024-07-15 20:59:16.232926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:12.415 [2024-07-15 20:59:16.232934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:12.415 [2024-07-15 20:59:16.232952] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.415 [2024-07-15 20:59:16.232969] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.415 [2024-07-15 20:59:16.233815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:12.416 [2024-07-15 20:59:16.233829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:12.416 [2024-07-15 20:59:16.233837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:12.416 [2024-07-15 20:59:16.233847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:12.416 [2024-07-15 20:59:16.233856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.416 [2024-07-15 20:59:16.234313] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.416 [2024-07-15 20:59:16.234326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173f660 with addr=10.0.0.2, port=4420 00:23:12.416 [2024-07-15 20:59:16.234334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173f660 is same with the state(5) to be set 00:23:12.416 [2024-07-15 20:59:16.234343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15745d0 (9): Bad file descriptor 00:23:12.416 [2024-07-15 20:59:16.234352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17343a0 (9): Bad file descriptor 00:23:12.416 [2024-07-15 20:59:16.234360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:12.416 [2024-07-15 20:59:16.234366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:12.416 [2024-07-15 20:59:16.234373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:12.416 [2024-07-15 20:59:16.234384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:12.416 [2024-07-15 20:59:16.234390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:12.416 [2024-07-15 20:59:16.234397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:12.416 [2024-07-15 20:59:16.234432] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.416 [2024-07-15 20:59:16.234446] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.416 [2024-07-15 20:59:16.234457] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.416 [2024-07-15 20:59:16.234467] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.416 [2024-07-15 20:59:16.234533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.416 [2024-07-15 20:59:16.234541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.416 [2024-07-15 20:59:16.234999] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.416 [2024-07-15 20:59:16.235009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b1030 with addr=10.0.0.2, port=4420 00:23:12.416 [2024-07-15 20:59:16.235017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b1030 is same with the state(5) to be set 00:23:12.416 [2024-07-15 20:59:16.235301] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.416 [2024-07-15 20:59:16.235311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b0ca0 with addr=10.0.0.2, port=4420 00:23:12.416 [2024-07-15 20:59:16.235318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b0ca0 is same with the state(5) to be set 00:23:12.416 [2024-07-15 20:59:16.235552] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.416 [2024-07-15 20:59:16.235561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1740210 with addr=10.0.0.2, port=4420 00:23:12.416 [2024-07-15 20:59:16.235568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1740210 is same with the state(5) to be set 00:23:12.416 [2024-07-15 20:59:16.235839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.416 [2024-07-15 20:59:16.235848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1711e90 with addr=10.0.0.2, port=4420 00:23:12.416 [2024-07-15 20:59:16.235855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711e90 is same with the state(5) to be set 00:23:12.416 [2024-07-15 20:59:16.235863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173f660 (9): Bad file descriptor 00:23:12.416 [2024-07-15 20:59:16.235871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.416 [2024-07-15 20:59:16.235877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.416 [2024-07-15 20:59:16.235883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.416 [2024-07-15 20:59:16.235893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:12.416 [2024-07-15 20:59:16.235899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:12.416 [2024-07-15 20:59:16.235906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:12.416 [2024-07-15 20:59:16.235954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:12.416 [2024-07-15 20:59:16.235964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.416 [2024-07-15 20:59:16.235970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:12.416 [2024-07-15 20:59:16.235983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b1030 (9): Bad file descriptor 00:23:12.416 [2024-07-15 20:59:16.235992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b0ca0 (9): Bad file descriptor 00:23:12.416 [2024-07-15 20:59:16.236001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1740210 (9): Bad file descriptor 00:23:12.416 [2024-07-15 20:59:16.236010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1711e90 (9): Bad file descriptor 00:23:12.416 [2024-07-15 20:59:16.236018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:12.416 [2024-07-15 20:59:16.236024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:12.416 [2024-07-15 20:59:16.236030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:12.416 [2024-07-15 20:59:16.236057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.416 [2024-07-15 20:59:16.236491] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.416 [2024-07-15 20:59:16.236501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1712a70 with addr=10.0.0.2, port=4420 00:23:12.416 [2024-07-15 20:59:16.236509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1712a70 is same with the state(5) to be set 00:23:12.416 [2024-07-15 20:59:16.236515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:12.416 [2024-07-15 20:59:16.236522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:12.416 [2024-07-15 20:59:16.236528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:12.416 [2024-07-15 20:59:16.236537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:12.416 [2024-07-15 20:59:16.236544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:12.416 [2024-07-15 20:59:16.236551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:12.416 [2024-07-15 20:59:16.236560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:12.416 [2024-07-15 20:59:16.236566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:12.416 [2024-07-15 20:59:16.236572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:12.416 [2024-07-15 20:59:16.236581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:12.416 [2024-07-15 20:59:16.236588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:12.416 [2024-07-15 20:59:16.236594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
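Each "controller reinitialization failed" / "in failed state" pair above is the bdev_nvme layer giving up on one reconnect attempt for a controller whose target no longer exists. This test leaves retry behaviour at its defaults; the sketch below shows how the retry window could be bounded instead. The option names are assumed to match scripts/rpc.py in this SPDK revision, and such options are normally set before any controller is attached.

    # sketch only, not part of shutdown_tc3: bound how long bdev_nvme keeps
    # retrying a lost controller before declaring it failed
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options \
        --ctrlr-loss-timeout-sec 5 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 2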
00:23:12.416 [2024-07-15 20:59:16.236623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.416 [2024-07-15 20:59:16.236630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.416 [2024-07-15 20:59:16.236636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.416 [2024-07-15 20:59:16.236642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.416 [2024-07-15 20:59:16.236649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1712a70 (9): Bad file descriptor 00:23:12.416 [2024-07-15 20:59:16.236675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:12.416 [2024-07-15 20:59:16.236681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:12.416 [2024-07-15 20:59:16.236688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:12.416 [2024-07-15 20:59:16.236715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.677 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:12.677 20:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:13.620 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1659419 00:23:13.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1659419) - No such process 00:23:13.620 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:13.620 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:13.620 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:13.620 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:13.620 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:13.620 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:13.620 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:13.620 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:13.620 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:13.620 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:13.620 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:13.620 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:13.620 rmmod nvme_tcp 00:23:13.620 rmmod nvme_fabrics 00:23:13.620 rmmod nvme_keyring 00:23:13.620 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:13.621 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:13.621 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:13.621 
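The trace above is the stoptarget/nvmftestfini path: the recorded target pid is killed (it is already gone, hence "No such process"), the generated bdevperf.conf and rpcs.txt are removed, and modprobe -r unloads nvme-tcp and nvme-fabrics, which pulls nvme_keyring out as a dependency. Together with the namespace and address cleanup that follows below, the teardown amounts to roughly the following sketch; the explicit "ip netns delete" is an assumption standing in for the _remove_spdk_ns helper in nvmf/common.sh.

    # condensed sketch of nvmftestfini; interface and namespace names are the ones
    # used by this job (cvl_0_1, cvl_0_0_ns_spdk)
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # stand-in for _remove_spdk_ns
    ip -4 addr flush cvl_0_1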
20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:13.621 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:13.621 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:13.621 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:13.621 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:13.621 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:13.621 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.621 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.621 20:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.230 20:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:16.230 00:23:16.230 real 0m7.604s 00:23:16.230 user 0m17.908s 00:23:16.230 sys 0m1.299s 00:23:16.230 20:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:16.230 20:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.230 ************************************ 00:23:16.230 END TEST nvmf_shutdown_tc3 00:23:16.230 ************************************ 00:23:16.230 20:59:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:16.230 20:59:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:16.230 00:23:16.230 real 0m32.149s 00:23:16.230 user 1m15.523s 00:23:16.230 sys 0m9.220s 00:23:16.230 20:59:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:16.230 20:59:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:16.230 ************************************ 00:23:16.230 END TEST nvmf_shutdown 00:23:16.230 ************************************ 00:23:16.230 20:59:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:16.230 20:59:19 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:16.230 20:59:19 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:16.230 20:59:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.230 20:59:19 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:16.230 20:59:19 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:16.230 20:59:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.230 20:59:19 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:16.230 20:59:19 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:16.230 20:59:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:16.230 20:59:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:16.230 20:59:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.230 ************************************ 00:23:16.230 START TEST nvmf_multicontroller 00:23:16.230 ************************************ 00:23:16.230 20:59:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:16.230 * Looking for test storage... 00:23:16.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:16.230 20:59:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:16.231 20:59:19 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:16.231 20:59:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.858 20:59:26 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:22.858 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:22.858 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:22.858 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:22.858 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:22.858 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.120 20:59:26 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:23.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:23:23.120 00:23:23.120 --- 10.0.0.2 ping statistics --- 00:23:23.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.120 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:23.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:23:23.120 00:23:23.120 --- 10.0.0.1 ping statistics --- 00:23:23.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.120 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:23.120 20:59:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:23.120 20:59:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:23.120 20:59:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:23.120 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:23.120 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.382 20:59:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1664315 00:23:23.382 20:59:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1664315 00:23:23.382 20:59:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:23.382 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1664315 ']' 00:23:23.382 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.382 20:59:27 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.382 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.382 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.382 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.382 [2024-07-15 20:59:27.075211] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:23:23.382 [2024-07-15 20:59:27.075273] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.382 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.382 [2024-07-15 20:59:27.160794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:23.382 [2024-07-15 20:59:27.253945] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.382 [2024-07-15 20:59:27.254003] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.382 [2024-07-15 20:59:27.254011] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.382 [2024-07-15 20:59:27.254018] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.382 [2024-07-15 20:59:27.254024] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
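At this point the target side of the multicontroller test is up: nvmf_tgt was launched inside the cvl_0_0_ns_spdk namespace with core mask 0xE, and waitforlisten blocks until its RPC socket answers before any subsystems are configured. A simplified sketch of that launch-and-wait step follows; the polling loop is an assumption, since the real helper in autotest_common.sh does more bookkeeping.

    # simplified sketch of nvmfappstart + waitforlisten as traced above
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # rpc_get_methods is a standard SPDK RPC; poll until the target answers on its socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"

The trace that follows then creates the TCP transport, two malloc-backed subsystems (cnode1 and cnode2) with listeners on ports 4420 and 4421, and finally starts bdevperf, which attaches and re-attaches controllers to exercise the duplicate-name and multipath error paths.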
00:23:23.382 [2024-07-15 20:59:27.254168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.382 [2024-07-15 20:59:27.254262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:23.382 [2024-07-15 20:59:27.254416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.953 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.953 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:23.953 20:59:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:23.953 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:23.953 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.214 [2024-07-15 20:59:27.891899] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.214 Malloc0 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.214 [2024-07-15 20:59:27.961545] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.214 
20:59:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.214 [2024-07-15 20:59:27.973480] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.214 Malloc1 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.214 20:59:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.214 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.214 20:59:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:24.214 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.214 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.214 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.214 20:59:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:24.214 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.214 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.214 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.214 20:59:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:24.214 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.214 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.214 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.214 20:59:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1664571 00:23:24.214 20:59:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:24.215 20:59:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:24.215 20:59:28 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1664571 /var/tmp/bdevperf.sock 00:23:24.215 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1664571 ']' 00:23:24.215 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.215 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.215 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.215 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.215 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.157 NVMe0n1 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.157 1 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.157 request: 00:23:25.157 { 00:23:25.157 "name": "NVMe0", 00:23:25.157 "trtype": "tcp", 00:23:25.157 "traddr": "10.0.0.2", 00:23:25.157 "adrfam": "ipv4", 00:23:25.157 "trsvcid": "4420", 00:23:25.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.157 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:25.157 "hostaddr": "10.0.0.2", 00:23:25.157 "hostsvcid": "60000", 00:23:25.157 "prchk_reftag": false, 00:23:25.157 "prchk_guard": false, 00:23:25.157 "hdgst": false, 00:23:25.157 "ddgst": false, 00:23:25.157 "method": "bdev_nvme_attach_controller", 00:23:25.157 "req_id": 1 00:23:25.157 } 00:23:25.157 Got JSON-RPC error response 00:23:25.157 response: 00:23:25.157 { 00:23:25.157 "code": -114, 00:23:25.157 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:25.157 } 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.157 request: 00:23:25.157 { 00:23:25.157 "name": "NVMe0", 00:23:25.157 "trtype": "tcp", 00:23:25.157 "traddr": "10.0.0.2", 00:23:25.157 "adrfam": "ipv4", 00:23:25.157 "trsvcid": "4420", 00:23:25.157 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:25.157 "hostaddr": "10.0.0.2", 00:23:25.157 "hostsvcid": "60000", 00:23:25.157 "prchk_reftag": false, 00:23:25.157 "prchk_guard": false, 00:23:25.157 
"hdgst": false, 00:23:25.157 "ddgst": false, 00:23:25.157 "method": "bdev_nvme_attach_controller", 00:23:25.157 "req_id": 1 00:23:25.157 } 00:23:25.157 Got JSON-RPC error response 00:23:25.157 response: 00:23:25.157 { 00:23:25.157 "code": -114, 00:23:25.157 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:25.157 } 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:25.157 20:59:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.157 request: 00:23:25.157 { 00:23:25.157 "name": "NVMe0", 00:23:25.157 "trtype": "tcp", 00:23:25.157 "traddr": "10.0.0.2", 00:23:25.157 "adrfam": "ipv4", 00:23:25.157 "trsvcid": "4420", 00:23:25.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.157 "hostaddr": "10.0.0.2", 00:23:25.157 "hostsvcid": "60000", 00:23:25.157 "prchk_reftag": false, 00:23:25.157 "prchk_guard": false, 00:23:25.157 "hdgst": false, 00:23:25.157 "ddgst": false, 00:23:25.157 "multipath": "disable", 00:23:25.157 "method": "bdev_nvme_attach_controller", 00:23:25.157 "req_id": 1 00:23:25.157 } 00:23:25.157 Got JSON-RPC error response 00:23:25.157 response: 00:23:25.157 { 00:23:25.157 "code": -114, 00:23:25.157 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:25.157 } 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:25.157 20:59:29 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.157 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.157 request: 00:23:25.157 { 00:23:25.157 "name": "NVMe0", 00:23:25.157 "trtype": "tcp", 00:23:25.157 "traddr": "10.0.0.2", 00:23:25.157 "adrfam": "ipv4", 00:23:25.157 "trsvcid": "4420", 00:23:25.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.157 "hostaddr": "10.0.0.2", 00:23:25.157 "hostsvcid": "60000", 00:23:25.157 "prchk_reftag": false, 00:23:25.157 "prchk_guard": false, 00:23:25.157 "hdgst": false, 00:23:25.157 "ddgst": false, 00:23:25.157 "multipath": "failover", 00:23:25.157 "method": "bdev_nvme_attach_controller", 00:23:25.157 "req_id": 1 00:23:25.157 } 00:23:25.157 Got JSON-RPC error response 00:23:25.157 response: 00:23:25.157 { 00:23:25.157 "code": -114, 00:23:25.157 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:25.158 } 00:23:25.158 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:25.158 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:25.158 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:25.158 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:25.158 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:25.158 20:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.158 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.158 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.418 00:23:25.418 20:59:29 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.418 20:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.418 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.418 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.418 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.418 20:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:25.418 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.418 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.418 00:23:25.418 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.418 20:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.418 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.418 20:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:25.418 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.418 20:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.418 20:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:25.419 20:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:26.801 0 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1664571 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1664571 ']' 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1664571 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1664571 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1664571' 00:23:26.801 killing process with pid 1664571 00:23:26.801 20:59:30 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1664571 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1664571 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:26.801 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:26.801 [2024-07-15 20:59:28.093637] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:23:26.801 [2024-07-15 20:59:28.093688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664571 ] 00:23:26.801 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.801 [2024-07-15 20:59:28.150996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.801 [2024-07-15 20:59:28.215729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.801 [2024-07-15 20:59:29.231167] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name bf528083-3ced-4b7f-8898-c3c93288f7bb already exists 00:23:26.801 [2024-07-15 20:59:29.231194] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:bf528083-3ced-4b7f-8898-c3c93288f7bb alias for bdev NVMe1n1 00:23:26.801 [2024-07-15 20:59:29.231203] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:26.801 Running I/O for 1 seconds... 
00:23:26.801 00:23:26.801 Latency(us) 00:23:26.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.801 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:26.801 NVMe0n1 : 1.00 20144.73 78.69 0.00 0.00 6336.54 4259.84 16820.91 00:23:26.801 =================================================================================================================== 00:23:26.801 Total : 20144.73 78.69 0.00 0.00 6336.54 4259.84 16820.91 00:23:26.801 Received shutdown signal, test time was about 1.000000 seconds 00:23:26.801 00:23:26.801 Latency(us) 00:23:26.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.801 =================================================================================================================== 00:23:26.801 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.801 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:26.801 rmmod nvme_tcp 00:23:26.801 rmmod nvme_fabrics 00:23:26.801 rmmod nvme_keyring 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1664315 ']' 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1664315 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1664315 ']' 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1664315 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.801 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1664315 00:23:27.062 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:27.062 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:27.062 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1664315' 00:23:27.062 killing process with pid 1664315 00:23:27.062 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1664315 00:23:27.062 20:59:30 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1664315 00:23:27.062 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:27.062 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:27.062 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:27.062 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:27.062 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:27.062 20:59:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.062 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.062 20:59:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.605 20:59:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:29.605 00:23:29.605 real 0m13.217s 00:23:29.605 user 0m15.781s 00:23:29.605 sys 0m5.977s 00:23:29.605 20:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:29.605 20:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.605 ************************************ 00:23:29.605 END TEST nvmf_multicontroller 00:23:29.605 ************************************ 00:23:29.605 20:59:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:29.605 20:59:32 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:29.605 20:59:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:29.605 20:59:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:29.605 20:59:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:29.605 ************************************ 00:23:29.605 START TEST nvmf_aer 00:23:29.605 ************************************ 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:29.605 * Looking for test storage... 
00:23:29.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:29.605 20:59:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:36.190 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:23:36.190 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:36.190 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:36.190 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.190 
20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.190 20:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:36.190 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.190 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:36.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:23:36.451 00:23:36.451 --- 10.0.0.2 ping statistics --- 00:23:36.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.451 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:36.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.442 ms 00:23:36.451 00:23:36.451 --- 10.0.0.1 ping statistics --- 00:23:36.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.451 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1669206 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1669206 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1669206 ']' 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.451 20:59:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.711 [2024-07-15 20:59:40.351882] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:23:36.711 [2024-07-15 20:59:40.351955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.711 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.711 [2024-07-15 20:59:40.425215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:36.711 [2024-07-15 20:59:40.499567] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.711 [2024-07-15 20:59:40.499609] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:36.711 [2024-07-15 20:59:40.499616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.711 [2024-07-15 20:59:40.499623] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.711 [2024-07-15 20:59:40.499628] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:36.711 [2024-07-15 20:59:40.499779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.711 [2024-07-15 20:59:40.499891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.711 [2024-07-15 20:59:40.499918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:36.711 [2024-07-15 20:59:40.499920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.282 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:37.282 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:23:37.282 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:37.282 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:37.282 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.282 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.282 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:37.282 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.282 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.282 [2024-07-15 20:59:41.171743] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.542 Malloc0 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.542 [2024-07-15 20:59:41.231198] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.542 [ 00:23:37.542 { 00:23:37.542 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:37.542 "subtype": "Discovery", 00:23:37.542 "listen_addresses": [], 00:23:37.542 "allow_any_host": true, 00:23:37.542 "hosts": [] 00:23:37.542 }, 00:23:37.542 { 00:23:37.542 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.542 "subtype": "NVMe", 00:23:37.542 "listen_addresses": [ 00:23:37.542 { 00:23:37.542 "trtype": "TCP", 00:23:37.542 "adrfam": "IPv4", 00:23:37.542 "traddr": "10.0.0.2", 00:23:37.542 "trsvcid": "4420" 00:23:37.542 } 00:23:37.542 ], 00:23:37.542 "allow_any_host": true, 00:23:37.542 "hosts": [], 00:23:37.542 "serial_number": "SPDK00000000000001", 00:23:37.542 "model_number": "SPDK bdev Controller", 00:23:37.542 "max_namespaces": 2, 00:23:37.542 "min_cntlid": 1, 00:23:37.542 "max_cntlid": 65519, 00:23:37.542 "namespaces": [ 00:23:37.542 { 00:23:37.542 "nsid": 1, 00:23:37.542 "bdev_name": "Malloc0", 00:23:37.542 "name": "Malloc0", 00:23:37.542 "nguid": "CE008D2CF0994C54AE6B87BE75A94C97", 00:23:37.542 "uuid": "ce008d2c-f099-4c54-ae6b-87be75a94c97" 00:23:37.542 } 00:23:37.542 ] 00:23:37.542 } 00:23:37.542 ] 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1669377 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:37.542 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:37.542 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:37.803 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.804 Malloc1 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.804 [ 00:23:37.804 { 00:23:37.804 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:37.804 "subtype": "Discovery", 00:23:37.804 "listen_addresses": [], 00:23:37.804 "allow_any_host": true, 00:23:37.804 "hosts": [] 00:23:37.804 }, 00:23:37.804 { 00:23:37.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.804 "subtype": "NVMe", 00:23:37.804 "listen_addresses": [ 00:23:37.804 { 00:23:37.804 "trtype": "TCP", 00:23:37.804 "adrfam": "IPv4", 00:23:37.804 "traddr": "10.0.0.2", 00:23:37.804 "trsvcid": "4420" 00:23:37.804 } 00:23:37.804 ], 00:23:37.804 "allow_any_host": true, 00:23:37.804 "hosts": [], 00:23:37.804 "serial_number": "SPDK00000000000001", 00:23:37.804 "model_number": "SPDK bdev Controller", 00:23:37.804 "max_namespaces": 2, 00:23:37.804 "min_cntlid": 1, 00:23:37.804 "max_cntlid": 65519, 00:23:37.804 "namespaces": [ 00:23:37.804 { 00:23:37.804 "nsid": 1, 00:23:37.804 "bdev_name": "Malloc0", 00:23:37.804 "name": "Malloc0", 00:23:37.804 "nguid": "CE008D2CF0994C54AE6B87BE75A94C97", 00:23:37.804 "uuid": "ce008d2c-f099-4c54-ae6b-87be75a94c97" 00:23:37.804 }, 00:23:37.804 { 00:23:37.804 "nsid": 2, 00:23:37.804 "bdev_name": "Malloc1", 00:23:37.804 "name": "Malloc1", 00:23:37.804 "nguid": "99D22EC97F904685AFD42D3FA4149604", 00:23:37.804 "uuid": "99d22ec9-7f90-4685-afd4-2d3fa4149604" 00:23:37.804 } 00:23:37.804 ] 00:23:37.804 } 00:23:37.804 ] 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1669377 00:23:37.804 Asynchronous Event Request test 00:23:37.804 Attaching to 10.0.0.2 00:23:37.804 Attached to 10.0.0.2 00:23:37.804 Registering asynchronous event callbacks... 00:23:37.804 Starting namespace attribute notice tests for all controllers... 00:23:37.804 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:37.804 aer_cb - Changed Namespace 00:23:37.804 Cleaning up... 
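[Editor's note] The nvmf_aer run above reduces to a short RPC flow: create the TCP transport, expose one malloc bdev as namespace 1 of nqn.2016-06.io.spdk:cnode1, start the aer tool so it blocks waiting for a namespace-attribute-change notice, then hot-add a second namespace so the controller raises the AEN ("aer_cb - Changed Namespace" above). Below is a minimal sketch of that same sequence, assuming the standard scripts/rpc.py client talking to the target's default /var/tmp/spdk.sock and paths relative to an SPDK checkout; the commands mirror the rpc_cmd calls traced above rather than quoting aer.sh verbatim.

    # sketch: reproduce the namespace hot-add that fires the AEN
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # run the AER listener in the background; the test above waits on /tmp/aer_touch_file
    # (cf. waitforfile) before it continues
    ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
    # adding a second namespace is what triggers the Changed Namespace asynchronous event
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

After the second nvmf_subsystem_add_ns the get_subsystems output gains nsid 2 (Malloc1), exactly as in the JSON dump above, and the aer tool logs the aen_event_type 0x02 callback before cleaning up.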
00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:37.804 rmmod nvme_tcp 00:23:37.804 rmmod nvme_fabrics 00:23:37.804 rmmod nvme_keyring 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1669206 ']' 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1669206 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1669206 ']' 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1669206 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:37.804 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1669206 00:23:38.066 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:38.066 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:38.066 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1669206' 00:23:38.066 killing process with pid 1669206 00:23:38.066 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1669206 00:23:38.066 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1669206 00:23:38.066 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:38.066 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:38.066 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:23:38.066 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:38.066 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:38.066 20:59:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.066 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.066 20:59:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.611 20:59:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:40.611 00:23:40.611 real 0m10.890s 00:23:40.611 user 0m7.431s 00:23:40.611 sys 0m5.735s 00:23:40.611 20:59:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:40.611 20:59:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.611 ************************************ 00:23:40.611 END TEST nvmf_aer 00:23:40.611 ************************************ 00:23:40.611 20:59:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:40.611 20:59:43 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:40.611 20:59:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:40.611 20:59:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:40.611 20:59:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:40.611 ************************************ 00:23:40.611 START TEST nvmf_async_init 00:23:40.611 ************************************ 00:23:40.611 20:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:40.611 * Looking for test storage... 
00:23:40.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:40.611 20:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:40.611 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:40.611 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.611 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.611 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4d00f3e1c7d0498a86d8d3a72c3db838 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:40.612 20:59:44 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:40.612 20:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:47.247 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:47.247 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:47.247 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:47.248 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:47.248 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:47.248 20:59:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:47.248 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:47.248 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:47.248 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:47.248 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.248 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:47.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:47.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:23:47.509 00:23:47.509 --- 10.0.0.2 ping statistics --- 00:23:47.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.509 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:47.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:47.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.424 ms 00:23:47.509 00:23:47.509 --- 10.0.0.1 ping statistics --- 00:23:47.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.509 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1673658 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1673658 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1673658 ']' 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:47.509 20:59:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.509 [2024-07-15 20:59:51.297395] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:23:47.509 [2024-07-15 20:59:51.297463] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.509 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.509 [2024-07-15 20:59:51.370011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.770 [2024-07-15 20:59:51.444459] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.770 [2024-07-15 20:59:51.444498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.770 [2024-07-15 20:59:51.444506] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.770 [2024-07-15 20:59:51.444513] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.770 [2024-07-15 20:59:51.444518] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:47.770 [2024-07-15 20:59:51.444546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.347 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:48.347 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:23:48.347 20:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.348 [2024-07-15 20:59:52.123535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.348 null0 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.348 20:59:52 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4d00f3e1c7d0498a86d8d3a72c3db838 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.348 [2024-07-15 20:59:52.183794] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.348 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.608 nvme0n1 00:23:48.608 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.608 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:48.608 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.608 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.608 [ 00:23:48.608 { 00:23:48.608 "name": "nvme0n1", 00:23:48.608 "aliases": [ 00:23:48.608 "4d00f3e1-c7d0-498a-86d8-d3a72c3db838" 00:23:48.608 ], 00:23:48.608 "product_name": "NVMe disk", 00:23:48.608 "block_size": 512, 00:23:48.608 "num_blocks": 2097152, 00:23:48.608 "uuid": "4d00f3e1-c7d0-498a-86d8-d3a72c3db838", 00:23:48.608 "assigned_rate_limits": { 00:23:48.608 "rw_ios_per_sec": 0, 00:23:48.608 "rw_mbytes_per_sec": 0, 00:23:48.608 "r_mbytes_per_sec": 0, 00:23:48.608 "w_mbytes_per_sec": 0 00:23:48.608 }, 00:23:48.608 "claimed": false, 00:23:48.608 "zoned": false, 00:23:48.608 "supported_io_types": { 00:23:48.608 "read": true, 00:23:48.608 "write": true, 00:23:48.608 "unmap": false, 00:23:48.608 "flush": true, 00:23:48.608 "reset": true, 00:23:48.608 "nvme_admin": true, 00:23:48.608 "nvme_io": true, 00:23:48.608 "nvme_io_md": false, 00:23:48.608 "write_zeroes": true, 00:23:48.608 "zcopy": false, 00:23:48.608 "get_zone_info": false, 00:23:48.608 "zone_management": false, 00:23:48.608 "zone_append": false, 00:23:48.608 "compare": true, 00:23:48.608 "compare_and_write": true, 00:23:48.608 "abort": true, 00:23:48.608 "seek_hole": false, 00:23:48.608 "seek_data": false, 00:23:48.608 "copy": true, 00:23:48.608 "nvme_iov_md": false 00:23:48.608 }, 00:23:48.608 "memory_domains": [ 00:23:48.608 { 00:23:48.608 "dma_device_id": "system", 00:23:48.608 "dma_device_type": 1 00:23:48.608 } 00:23:48.608 ], 00:23:48.608 "driver_specific": { 00:23:48.608 "nvme": [ 00:23:48.608 { 00:23:48.608 "trid": { 00:23:48.608 "trtype": "TCP", 00:23:48.608 "adrfam": "IPv4", 00:23:48.608 "traddr": "10.0.0.2", 
00:23:48.608 "trsvcid": "4420", 00:23:48.608 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:48.608 }, 00:23:48.608 "ctrlr_data": { 00:23:48.608 "cntlid": 1, 00:23:48.608 "vendor_id": "0x8086", 00:23:48.608 "model_number": "SPDK bdev Controller", 00:23:48.608 "serial_number": "00000000000000000000", 00:23:48.608 "firmware_revision": "24.09", 00:23:48.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:48.608 "oacs": { 00:23:48.608 "security": 0, 00:23:48.608 "format": 0, 00:23:48.608 "firmware": 0, 00:23:48.608 "ns_manage": 0 00:23:48.608 }, 00:23:48.608 "multi_ctrlr": true, 00:23:48.608 "ana_reporting": false 00:23:48.608 }, 00:23:48.608 "vs": { 00:23:48.608 "nvme_version": "1.3" 00:23:48.608 }, 00:23:48.608 "ns_data": { 00:23:48.608 "id": 1, 00:23:48.608 "can_share": true 00:23:48.608 } 00:23:48.608 } 00:23:48.608 ], 00:23:48.608 "mp_policy": "active_passive" 00:23:48.608 } 00:23:48.608 } 00:23:48.608 ] 00:23:48.608 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.608 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:48.608 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.608 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.608 [2024-07-15 20:59:52.460364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.608 [2024-07-15 20:59:52.460426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2241df0 (9): Bad file descriptor 00:23:48.870 [2024-07-15 20:59:52.592219] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.870 [ 00:23:48.870 { 00:23:48.870 "name": "nvme0n1", 00:23:48.870 "aliases": [ 00:23:48.870 "4d00f3e1-c7d0-498a-86d8-d3a72c3db838" 00:23:48.870 ], 00:23:48.870 "product_name": "NVMe disk", 00:23:48.870 "block_size": 512, 00:23:48.870 "num_blocks": 2097152, 00:23:48.870 "uuid": "4d00f3e1-c7d0-498a-86d8-d3a72c3db838", 00:23:48.870 "assigned_rate_limits": { 00:23:48.870 "rw_ios_per_sec": 0, 00:23:48.870 "rw_mbytes_per_sec": 0, 00:23:48.870 "r_mbytes_per_sec": 0, 00:23:48.870 "w_mbytes_per_sec": 0 00:23:48.870 }, 00:23:48.870 "claimed": false, 00:23:48.870 "zoned": false, 00:23:48.870 "supported_io_types": { 00:23:48.870 "read": true, 00:23:48.870 "write": true, 00:23:48.870 "unmap": false, 00:23:48.870 "flush": true, 00:23:48.870 "reset": true, 00:23:48.870 "nvme_admin": true, 00:23:48.870 "nvme_io": true, 00:23:48.870 "nvme_io_md": false, 00:23:48.870 "write_zeroes": true, 00:23:48.870 "zcopy": false, 00:23:48.870 "get_zone_info": false, 00:23:48.870 "zone_management": false, 00:23:48.870 "zone_append": false, 00:23:48.870 "compare": true, 00:23:48.870 "compare_and_write": true, 00:23:48.870 "abort": true, 00:23:48.870 "seek_hole": false, 00:23:48.870 "seek_data": false, 00:23:48.870 "copy": true, 00:23:48.870 "nvme_iov_md": false 00:23:48.870 }, 00:23:48.870 "memory_domains": [ 00:23:48.870 { 00:23:48.870 "dma_device_id": "system", 00:23:48.870 "dma_device_type": 
1 00:23:48.870 } 00:23:48.870 ], 00:23:48.870 "driver_specific": { 00:23:48.870 "nvme": [ 00:23:48.870 { 00:23:48.870 "trid": { 00:23:48.870 "trtype": "TCP", 00:23:48.870 "adrfam": "IPv4", 00:23:48.870 "traddr": "10.0.0.2", 00:23:48.870 "trsvcid": "4420", 00:23:48.870 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:48.870 }, 00:23:48.870 "ctrlr_data": { 00:23:48.870 "cntlid": 2, 00:23:48.870 "vendor_id": "0x8086", 00:23:48.870 "model_number": "SPDK bdev Controller", 00:23:48.870 "serial_number": "00000000000000000000", 00:23:48.870 "firmware_revision": "24.09", 00:23:48.870 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:48.870 "oacs": { 00:23:48.870 "security": 0, 00:23:48.870 "format": 0, 00:23:48.870 "firmware": 0, 00:23:48.870 "ns_manage": 0 00:23:48.870 }, 00:23:48.870 "multi_ctrlr": true, 00:23:48.870 "ana_reporting": false 00:23:48.870 }, 00:23:48.870 "vs": { 00:23:48.870 "nvme_version": "1.3" 00:23:48.870 }, 00:23:48.870 "ns_data": { 00:23:48.870 "id": 1, 00:23:48.870 "can_share": true 00:23:48.870 } 00:23:48.870 } 00:23:48.870 ], 00:23:48.870 "mp_policy": "active_passive" 00:23:48.870 } 00:23:48.870 } 00:23:48.870 ] 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.9Cs2IDGoR2 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.9Cs2IDGoR2 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.870 [2024-07-15 20:59:52.660973] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:48.870 [2024-07-15 20:59:52.661093] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Cs2IDGoR2 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.870 [2024-07-15 20:59:52.672994] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Cs2IDGoR2 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.870 [2024-07-15 20:59:52.685046] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.870 [2024-07-15 20:59:52.685088] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:48.870 nvme0n1 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.870 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.131 [ 00:23:49.131 { 00:23:49.131 "name": "nvme0n1", 00:23:49.131 "aliases": [ 00:23:49.131 "4d00f3e1-c7d0-498a-86d8-d3a72c3db838" 00:23:49.131 ], 00:23:49.131 "product_name": "NVMe disk", 00:23:49.131 "block_size": 512, 00:23:49.131 "num_blocks": 2097152, 00:23:49.131 "uuid": "4d00f3e1-c7d0-498a-86d8-d3a72c3db838", 00:23:49.131 "assigned_rate_limits": { 00:23:49.131 "rw_ios_per_sec": 0, 00:23:49.131 "rw_mbytes_per_sec": 0, 00:23:49.131 "r_mbytes_per_sec": 0, 00:23:49.131 "w_mbytes_per_sec": 0 00:23:49.131 }, 00:23:49.131 "claimed": false, 00:23:49.131 "zoned": false, 00:23:49.131 "supported_io_types": { 00:23:49.131 "read": true, 00:23:49.131 "write": true, 00:23:49.131 "unmap": false, 00:23:49.131 "flush": true, 00:23:49.131 "reset": true, 00:23:49.131 "nvme_admin": true, 00:23:49.131 "nvme_io": true, 00:23:49.131 "nvme_io_md": false, 00:23:49.131 "write_zeroes": true, 00:23:49.131 "zcopy": false, 00:23:49.131 "get_zone_info": false, 00:23:49.131 "zone_management": false, 00:23:49.131 "zone_append": false, 00:23:49.131 "compare": true, 00:23:49.131 "compare_and_write": true, 00:23:49.131 "abort": true, 00:23:49.131 "seek_hole": false, 00:23:49.131 "seek_data": false, 00:23:49.131 "copy": true, 00:23:49.131 "nvme_iov_md": false 00:23:49.131 }, 00:23:49.131 "memory_domains": [ 00:23:49.131 { 00:23:49.131 "dma_device_id": "system", 00:23:49.131 "dma_device_type": 1 00:23:49.131 } 00:23:49.131 ], 00:23:49.131 "driver_specific": { 00:23:49.131 "nvme": [ 00:23:49.131 { 00:23:49.131 "trid": { 00:23:49.131 "trtype": "TCP", 00:23:49.131 "adrfam": "IPv4", 00:23:49.131 "traddr": "10.0.0.2", 00:23:49.131 "trsvcid": "4421", 00:23:49.131 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:49.131 }, 00:23:49.131 "ctrlr_data": { 00:23:49.131 "cntlid": 3, 00:23:49.131 "vendor_id": "0x8086", 00:23:49.131 "model_number": "SPDK bdev Controller", 00:23:49.131 "serial_number": "00000000000000000000", 00:23:49.131 "firmware_revision": "24.09", 00:23:49.131 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:23:49.131 "oacs": { 00:23:49.131 "security": 0, 00:23:49.131 "format": 0, 00:23:49.132 "firmware": 0, 00:23:49.132 "ns_manage": 0 00:23:49.132 }, 00:23:49.132 "multi_ctrlr": true, 00:23:49.132 "ana_reporting": false 00:23:49.132 }, 00:23:49.132 "vs": { 00:23:49.132 "nvme_version": "1.3" 00:23:49.132 }, 00:23:49.132 "ns_data": { 00:23:49.132 "id": 1, 00:23:49.132 "can_share": true 00:23:49.132 } 00:23:49.132 } 00:23:49.132 ], 00:23:49.132 "mp_policy": "active_passive" 00:23:49.132 } 00:23:49.132 } 00:23:49.132 ] 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.9Cs2IDGoR2 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:49.132 rmmod nvme_tcp 00:23:49.132 rmmod nvme_fabrics 00:23:49.132 rmmod nvme_keyring 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1673658 ']' 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1673658 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1673658 ']' 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1673658 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1673658 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1673658' 00:23:49.132 killing process with pid 1673658 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1673658 00:23:49.132 [2024-07-15 20:59:52.932281] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:23:49.132 [2024-07-15 20:59:52.932308] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:49.132 20:59:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1673658 00:23:49.392 20:59:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:49.392 20:59:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:49.392 20:59:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:49.392 20:59:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:49.392 20:59:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:49.392 20:59:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.392 20:59:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.392 20:59:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.303 20:59:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:51.303 00:23:51.303 real 0m11.124s 00:23:51.303 user 0m3.992s 00:23:51.303 sys 0m5.595s 00:23:51.303 20:59:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:51.303 20:59:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.303 ************************************ 00:23:51.303 END TEST nvmf_async_init 00:23:51.303 ************************************ 00:23:51.303 20:59:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:51.303 20:59:55 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:51.303 20:59:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:51.303 20:59:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:51.303 20:59:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:51.564 ************************************ 00:23:51.564 START TEST dma 00:23:51.564 ************************************ 00:23:51.564 20:59:55 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:51.564 * Looking for test storage... 
00:23:51.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:51.564 20:59:55 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.564 20:59:55 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.564 20:59:55 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.564 20:59:55 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.564 20:59:55 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.564 20:59:55 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.564 20:59:55 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.564 20:59:55 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:51.564 20:59:55 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:51.564 20:59:55 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:51.564 20:59:55 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:51.564 20:59:55 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:51.564 00:23:51.564 real 0m0.129s 00:23:51.564 user 0m0.063s 00:23:51.564 sys 0m0.076s 00:23:51.564 20:59:55 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:51.564 20:59:55 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:51.564 ************************************ 00:23:51.564 END TEST dma 00:23:51.564 ************************************ 00:23:51.564 20:59:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:51.564 20:59:55 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:51.564 20:59:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:51.564 20:59:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:51.564 20:59:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:51.564 ************************************ 00:23:51.564 START TEST nvmf_identify 00:23:51.564 ************************************ 00:23:51.564 20:59:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:51.825 * Looking for test storage... 
00:23:51.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.825 20:59:55 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:51.826 20:59:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:58.419 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:58.419 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:58.419 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:58.419 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:58.419 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:58.420 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:58.420 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.420 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:58.420 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:58.420 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:58.420 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:58.420 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:58.420 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.420 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:58.420 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:58.420 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:58.420 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:58.680 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:58.680 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.680 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:58.680 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.680 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:58.680 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:58.680 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:58.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:23:58.681 00:23:58.681 --- 10.0.0.2 ping statistics --- 00:23:58.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.681 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:23:58.681 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:58.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:58.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:23:58.681 00:23:58.681 --- 10.0.0.1 ping statistics --- 00:23:58.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.681 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:23:58.681 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.681 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:58.681 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:58.681 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1678204 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1678204 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1678204 ']' 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.941 21:00:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:58.941 [2024-07-15 21:00:02.681646] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:23:58.941 [2024-07-15 21:00:02.681712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.941 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.941 [2024-07-15 21:00:02.752353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:58.941 [2024-07-15 21:00:02.828269] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
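The nvmf_tcp_init trace above builds the test topology by moving one E810 port (cvl_0_0) into a private network namespace for the target while the other port (cvl_0_1) stays in the root namespace as the initiator side, then verifies connectivity with the two pings. A minimal standalone sketch of the same sequence, using only the interface names and addresses shown in this run (this is a condensed illustration, not a verbatim excerpt of nvmf/common.sh; error handling and cleanup are omitted):

    # Sketch of the namespaced NVMe/TCP topology set up by nvmf_tcp_init above.
    ip -4 addr flush cvl_0_0                              # start from clean interfaces
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                          # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                    # root ns -> target, as in the log
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator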
00:23:58.941 [2024-07-15 21:00:02.828305] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.941 [2024-07-15 21:00:02.828313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.941 [2024-07-15 21:00:02.828319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.941 [2024-07-15 21:00:02.828325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.941 [2024-07-15 21:00:02.828468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.941 [2024-07-15 21:00:02.828582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.941 [2024-07-15 21:00:02.828740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.941 [2024-07-15 21:00:02.828741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.893 [2024-07-15 21:00:03.463660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.893 Malloc0 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.893 [2024-07-15 21:00:03.559169] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:59.893 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.894 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.894 [ 00:23:59.894 { 00:23:59.894 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:59.894 "subtype": "Discovery", 00:23:59.894 "listen_addresses": [ 00:23:59.894 { 00:23:59.894 "trtype": "TCP", 00:23:59.894 "adrfam": "IPv4", 00:23:59.894 "traddr": "10.0.0.2", 00:23:59.894 "trsvcid": "4420" 00:23:59.894 } 00:23:59.894 ], 00:23:59.894 "allow_any_host": true, 00:23:59.894 "hosts": [] 00:23:59.894 }, 00:23:59.894 { 00:23:59.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.894 "subtype": "NVMe", 00:23:59.894 "listen_addresses": [ 00:23:59.894 { 00:23:59.894 "trtype": "TCP", 00:23:59.894 "adrfam": "IPv4", 00:23:59.894 "traddr": "10.0.0.2", 00:23:59.894 "trsvcid": "4420" 00:23:59.894 } 00:23:59.894 ], 00:23:59.894 "allow_any_host": true, 00:23:59.894 "hosts": [], 00:23:59.894 "serial_number": "SPDK00000000000001", 00:23:59.894 "model_number": "SPDK bdev Controller", 00:23:59.894 "max_namespaces": 32, 00:23:59.894 "min_cntlid": 1, 00:23:59.894 "max_cntlid": 65519, 00:23:59.894 "namespaces": [ 00:23:59.894 { 00:23:59.894 "nsid": 1, 00:23:59.894 "bdev_name": "Malloc0", 00:23:59.894 "name": "Malloc0", 00:23:59.894 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:59.894 "eui64": "ABCDEF0123456789", 00:23:59.894 "uuid": "78096914-a8e3-4de4-b2d0-0a3747d1a159" 00:23:59.894 } 00:23:59.894 ] 00:23:59.894 } 00:23:59.894 ] 00:23:59.894 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.894 21:00:03 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:59.894 [2024-07-15 21:00:03.622030] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
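The rpc_cmd calls traced above configure the nvmf_tgt that was launched inside the target namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF): create the TCP transport, back a subsystem with a 64 MiB malloc bdev, and expose both the subsystem and the discovery service on 10.0.0.2:4420, which produces the nvmf_get_subsystems JSON shown above. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so an approximately equivalent sequence issued directly with rpc.py would look like the following (a hedged sketch assuming the default /var/tmp/spdk.sock RPC socket):

    # Approximate rpc.py equivalent of the rpc_cmd sequence above (flags taken from the trace).
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems        # returns the JSON listing shown above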
00:23:59.894 [2024-07-15 21:00:03.622070] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1678495 ] 00:23:59.894 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.894 [2024-07-15 21:00:03.655706] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:59.894 [2024-07-15 21:00:03.655755] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:59.894 [2024-07-15 21:00:03.655760] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:59.894 [2024-07-15 21:00:03.655774] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:59.894 [2024-07-15 21:00:03.655780] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:59.894 [2024-07-15 21:00:03.656112] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:59.894 [2024-07-15 21:00:03.656146] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a11ec0 0 00:23:59.894 [2024-07-15 21:00:03.666133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:59.894 [2024-07-15 21:00:03.666145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:59.894 [2024-07-15 21:00:03.666150] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:59.894 [2024-07-15 21:00:03.666153] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:59.894 [2024-07-15 21:00:03.666189] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.894 [2024-07-15 21:00:03.666195] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.894 [2024-07-15 21:00:03.666199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a11ec0) 00:23:59.894 [2024-07-15 21:00:03.666210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:59.894 [2024-07-15 21:00:03.666226] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a94e40, cid 0, qid 0 00:23:59.894 [2024-07-15 21:00:03.674132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.894 [2024-07-15 21:00:03.674141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.894 [2024-07-15 21:00:03.674145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.894 [2024-07-15 21:00:03.674149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a94e40) on tqpair=0x1a11ec0 00:23:59.894 [2024-07-15 21:00:03.674158] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:59.894 [2024-07-15 21:00:03.674165] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:59.894 [2024-07-15 21:00:03.674170] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:59.894 [2024-07-15 21:00:03.674183] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.894 [2024-07-15 21:00:03.674187] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.894 [2024-07-15 21:00:03.674190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a11ec0) 00:23:59.894 [2024-07-15 21:00:03.674198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.894 [2024-07-15 21:00:03.674210] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a94e40, cid 0, qid 0 00:23:59.894 [2024-07-15 21:00:03.674434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.894 [2024-07-15 21:00:03.674441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.894 [2024-07-15 21:00:03.674444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.894 [2024-07-15 21:00:03.674448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a94e40) on tqpair=0x1a11ec0 00:23:59.894 [2024-07-15 21:00:03.674453] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:59.894 [2024-07-15 21:00:03.674460] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:59.894 [2024-07-15 21:00:03.674466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.894 [2024-07-15 21:00:03.674474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.894 [2024-07-15 21:00:03.674477] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a11ec0) 00:23:59.894 [2024-07-15 21:00:03.674484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.894 [2024-07-15 21:00:03.674495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a94e40, cid 0, qid 0 00:23:59.894 [2024-07-15 21:00:03.674717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.894 [2024-07-15 21:00:03.674723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.894 [2024-07-15 21:00:03.674727] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.894 [2024-07-15 21:00:03.674730] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a94e40) on tqpair=0x1a11ec0 00:23:59.894 [2024-07-15 21:00:03.674735] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:59.894 [2024-07-15 21:00:03.674743] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:59.894 [2024-07-15 21:00:03.674749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.894 [2024-07-15 21:00:03.674753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.894 [2024-07-15 21:00:03.674756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a11ec0) 00:23:59.894 [2024-07-15 21:00:03.674763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.894 [2024-07-15 21:00:03.674773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a94e40, cid 0, qid 0 00:23:59.894 [2024-07-15 21:00:03.674978] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.894 
[2024-07-15 21:00:03.674985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.894 [2024-07-15 21:00:03.674988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.894 [2024-07-15 21:00:03.674992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a94e40) on tqpair=0x1a11ec0 00:23:59.894 [2024-07-15 21:00:03.674997] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:59.895 [2024-07-15 21:00:03.675006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.675009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.675013] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a11ec0) 00:23:59.895 [2024-07-15 21:00:03.675019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.895 [2024-07-15 21:00:03.675029] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a94e40, cid 0, qid 0 00:23:59.895 [2024-07-15 21:00:03.675256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.895 [2024-07-15 21:00:03.675263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.895 [2024-07-15 21:00:03.675267] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.675271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a94e40) on tqpair=0x1a11ec0 00:23:59.895 [2024-07-15 21:00:03.675275] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:59.895 [2024-07-15 21:00:03.675280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:59.895 [2024-07-15 21:00:03.675287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:59.895 [2024-07-15 21:00:03.675392] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:59.895 [2024-07-15 21:00:03.675422] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:59.895 [2024-07-15 21:00:03.675430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.675434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.675437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a11ec0) 00:23:59.895 [2024-07-15 21:00:03.675444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.895 [2024-07-15 21:00:03.675454] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a94e40, cid 0, qid 0 00:23:59.895 [2024-07-15 21:00:03.675675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.895 [2024-07-15 21:00:03.675682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.895 [2024-07-15 21:00:03.675685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.675689] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a94e40) on tqpair=0x1a11ec0 00:23:59.895 [2024-07-15 21:00:03.675694] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:59.895 [2024-07-15 21:00:03.675703] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.675706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.675710] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a11ec0) 00:23:59.895 [2024-07-15 21:00:03.675716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.895 [2024-07-15 21:00:03.675726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a94e40, cid 0, qid 0 00:23:59.895 [2024-07-15 21:00:03.675931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.895 [2024-07-15 21:00:03.675938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.895 [2024-07-15 21:00:03.675941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.675945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a94e40) on tqpair=0x1a11ec0 00:23:59.895 [2024-07-15 21:00:03.675949] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:59.895 [2024-07-15 21:00:03.675954] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:59.895 [2024-07-15 21:00:03.675961] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:59.895 [2024-07-15 21:00:03.675974] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:59.895 [2024-07-15 21:00:03.675982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.675986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a11ec0) 00:23:59.895 [2024-07-15 21:00:03.675993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.895 [2024-07-15 21:00:03.676003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a94e40, cid 0, qid 0 00:23:59.895 [2024-07-15 21:00:03.676231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.895 [2024-07-15 21:00:03.676238] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.895 [2024-07-15 21:00:03.676242] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676246] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a11ec0): datao=0, datal=4096, cccid=0 00:23:59.895 [2024-07-15 21:00:03.676250] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a94e40) on tqpair(0x1a11ec0): expected_datao=0, payload_size=4096 00:23:59.895 [2024-07-15 21:00:03.676258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676265] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676269] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.895 [2024-07-15 21:00:03.676407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.895 [2024-07-15 21:00:03.676411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a94e40) on tqpair=0x1a11ec0 00:23:59.895 [2024-07-15 21:00:03.676422] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:59.895 [2024-07-15 21:00:03.676429] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:59.895 [2024-07-15 21:00:03.676433] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:59.895 [2024-07-15 21:00:03.676438] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:59.895 [2024-07-15 21:00:03.676442] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:59.895 [2024-07-15 21:00:03.676447] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:59.895 [2024-07-15 21:00:03.676455] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:59.895 [2024-07-15 21:00:03.676461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676469] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a11ec0) 00:23:59.895 [2024-07-15 21:00:03.676476] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:59.895 [2024-07-15 21:00:03.676487] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a94e40, cid 0, qid 0 00:23:59.895 [2024-07-15 21:00:03.676699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.895 [2024-07-15 21:00:03.676706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.895 [2024-07-15 21:00:03.676709] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676713] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a94e40) on tqpair=0x1a11ec0 00:23:59.895 [2024-07-15 21:00:03.676720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a11ec0) 00:23:59.895 [2024-07-15 21:00:03.676733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.895 [2024-07-15 21:00:03.676739] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676743] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676746] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a11ec0) 00:23:59.895 [2024-07-15 21:00:03.676752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.895 [2024-07-15 21:00:03.676758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a11ec0) 00:23:59.895 [2024-07-15 21:00:03.676773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.895 [2024-07-15 21:00:03.676779] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676783] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.895 [2024-07-15 21:00:03.676786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a11ec0) 00:23:59.895 [2024-07-15 21:00:03.676792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.895 [2024-07-15 21:00:03.676796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:59.896 [2024-07-15 21:00:03.676806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:59.896 [2024-07-15 21:00:03.676813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.676816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a11ec0) 00:23:59.896 [2024-07-15 21:00:03.676823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.896 [2024-07-15 21:00:03.676834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a94e40, cid 0, qid 0 00:23:59.896 [2024-07-15 21:00:03.676840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a94fc0, cid 1, qid 0 00:23:59.896 [2024-07-15 21:00:03.676844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95140, cid 2, qid 0 00:23:59.896 [2024-07-15 21:00:03.676849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a952c0, cid 3, qid 0 00:23:59.896 [2024-07-15 21:00:03.676854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95440, cid 4, qid 0 00:23:59.896 [2024-07-15 21:00:03.677135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.896 [2024-07-15 21:00:03.677142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.896 [2024-07-15 21:00:03.677145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.677149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95440) on tqpair=0x1a11ec0 00:23:59.896 [2024-07-15 21:00:03.677154] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:59.896 [2024-07-15 21:00:03.677159] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:59.896 [2024-07-15 21:00:03.677170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.677173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a11ec0) 00:23:59.896 [2024-07-15 21:00:03.677180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.896 [2024-07-15 21:00:03.677190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95440, cid 4, qid 0 00:23:59.896 [2024-07-15 21:00:03.677431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.896 [2024-07-15 21:00:03.677438] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.896 [2024-07-15 21:00:03.677441] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.677445] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a11ec0): datao=0, datal=4096, cccid=4 00:23:59.896 [2024-07-15 21:00:03.677449] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a95440) on tqpair(0x1a11ec0): expected_datao=0, payload_size=4096 00:23:59.896 [2024-07-15 21:00:03.677453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.677460] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.677466] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.677711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.896 [2024-07-15 21:00:03.677717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.896 [2024-07-15 21:00:03.677721] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.677724] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95440) on tqpair=0x1a11ec0 00:23:59.896 [2024-07-15 21:00:03.677736] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:59.896 [2024-07-15 21:00:03.677756] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.677760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a11ec0) 00:23:59.896 [2024-07-15 21:00:03.677766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.896 [2024-07-15 21:00:03.677773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.677777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.677780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a11ec0) 00:23:59.896 [2024-07-15 21:00:03.677786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.896 [2024-07-15 21:00:03.677799] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1a95440, cid 4, qid 0 00:23:59.896 [2024-07-15 21:00:03.677805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a955c0, cid 5, qid 0 00:23:59.896 [2024-07-15 21:00:03.677942] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.896 [2024-07-15 21:00:03.677948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.896 [2024-07-15 21:00:03.677952] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.677955] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a11ec0): datao=0, datal=1024, cccid=4 00:23:59.896 [2024-07-15 21:00:03.677960] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a95440) on tqpair(0x1a11ec0): expected_datao=0, payload_size=1024 00:23:59.896 [2024-07-15 21:00:03.677964] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.677970] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.677974] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.677979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.896 [2024-07-15 21:00:03.677985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.896 [2024-07-15 21:00:03.677988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.677992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a955c0) on tqpair=0x1a11ec0 00:23:59.896 [2024-07-15 21:00:03.719324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.896 [2024-07-15 21:00:03.719336] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.896 [2024-07-15 21:00:03.719340] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.719344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95440) on tqpair=0x1a11ec0 00:23:59.896 [2024-07-15 21:00:03.719359] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.719363] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a11ec0) 00:23:59.896 [2024-07-15 21:00:03.719370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.896 [2024-07-15 21:00:03.719386] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95440, cid 4, qid 0 00:23:59.896 [2024-07-15 21:00:03.719588] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.896 [2024-07-15 21:00:03.719597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.896 [2024-07-15 21:00:03.719601] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.719605] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a11ec0): datao=0, datal=3072, cccid=4 00:23:59.896 [2024-07-15 21:00:03.719609] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a95440) on tqpair(0x1a11ec0): expected_datao=0, payload_size=3072 00:23:59.896 [2024-07-15 21:00:03.719613] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.719728] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.719732] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.719894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.896 [2024-07-15 21:00:03.719900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.896 [2024-07-15 21:00:03.719903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.719907] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95440) on tqpair=0x1a11ec0 00:23:59.896 [2024-07-15 21:00:03.719915] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.719919] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a11ec0) 00:23:59.896 [2024-07-15 21:00:03.719925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.896 [2024-07-15 21:00:03.719939] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95440, cid 4, qid 0 00:23:59.896 [2024-07-15 21:00:03.724130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.896 [2024-07-15 21:00:03.724137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.896 [2024-07-15 21:00:03.724141] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.724144] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a11ec0): datao=0, datal=8, cccid=4 00:23:59.896 [2024-07-15 21:00:03.724149] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a95440) on tqpair(0x1a11ec0): expected_datao=0, payload_size=8 00:23:59.896 [2024-07-15 21:00:03.724153] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.724159] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.724163] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.761325] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.896 [2024-07-15 21:00:03.761335] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.896 [2024-07-15 21:00:03.761338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.896 [2024-07-15 21:00:03.761342] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95440) on tqpair=0x1a11ec0 00:23:59.896 ===================================================== 00:23:59.896 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:59.896 ===================================================== 00:23:59.897 Controller Capabilities/Features 00:23:59.897 ================================ 00:23:59.897 Vendor ID: 0000 00:23:59.897 Subsystem Vendor ID: 0000 00:23:59.897 Serial Number: .................... 00:23:59.897 Model Number: ........................................ 
00:23:59.897 Firmware Version: 24.09 00:23:59.897 Recommended Arb Burst: 0 00:23:59.897 IEEE OUI Identifier: 00 00 00 00:23:59.897 Multi-path I/O 00:23:59.897 May have multiple subsystem ports: No 00:23:59.897 May have multiple controllers: No 00:23:59.897 Associated with SR-IOV VF: No 00:23:59.897 Max Data Transfer Size: 131072 00:23:59.897 Max Number of Namespaces: 0 00:23:59.897 Max Number of I/O Queues: 1024 00:23:59.897 NVMe Specification Version (VS): 1.3 00:23:59.897 NVMe Specification Version (Identify): 1.3 00:23:59.897 Maximum Queue Entries: 128 00:23:59.897 Contiguous Queues Required: Yes 00:23:59.897 Arbitration Mechanisms Supported 00:23:59.897 Weighted Round Robin: Not Supported 00:23:59.897 Vendor Specific: Not Supported 00:23:59.897 Reset Timeout: 15000 ms 00:23:59.897 Doorbell Stride: 4 bytes 00:23:59.897 NVM Subsystem Reset: Not Supported 00:23:59.897 Command Sets Supported 00:23:59.897 NVM Command Set: Supported 00:23:59.897 Boot Partition: Not Supported 00:23:59.897 Memory Page Size Minimum: 4096 bytes 00:23:59.897 Memory Page Size Maximum: 4096 bytes 00:23:59.897 Persistent Memory Region: Not Supported 00:23:59.897 Optional Asynchronous Events Supported 00:23:59.897 Namespace Attribute Notices: Not Supported 00:23:59.897 Firmware Activation Notices: Not Supported 00:23:59.897 ANA Change Notices: Not Supported 00:23:59.897 PLE Aggregate Log Change Notices: Not Supported 00:23:59.897 LBA Status Info Alert Notices: Not Supported 00:23:59.897 EGE Aggregate Log Change Notices: Not Supported 00:23:59.897 Normal NVM Subsystem Shutdown event: Not Supported 00:23:59.897 Zone Descriptor Change Notices: Not Supported 00:23:59.897 Discovery Log Change Notices: Supported 00:23:59.897 Controller Attributes 00:23:59.897 128-bit Host Identifier: Not Supported 00:23:59.897 Non-Operational Permissive Mode: Not Supported 00:23:59.897 NVM Sets: Not Supported 00:23:59.897 Read Recovery Levels: Not Supported 00:23:59.897 Endurance Groups: Not Supported 00:23:59.897 Predictable Latency Mode: Not Supported 00:23:59.897 Traffic Based Keep ALive: Not Supported 00:23:59.897 Namespace Granularity: Not Supported 00:23:59.897 SQ Associations: Not Supported 00:23:59.897 UUID List: Not Supported 00:23:59.897 Multi-Domain Subsystem: Not Supported 00:23:59.897 Fixed Capacity Management: Not Supported 00:23:59.897 Variable Capacity Management: Not Supported 00:23:59.897 Delete Endurance Group: Not Supported 00:23:59.897 Delete NVM Set: Not Supported 00:23:59.897 Extended LBA Formats Supported: Not Supported 00:23:59.897 Flexible Data Placement Supported: Not Supported 00:23:59.897 00:23:59.897 Controller Memory Buffer Support 00:23:59.897 ================================ 00:23:59.897 Supported: No 00:23:59.897 00:23:59.897 Persistent Memory Region Support 00:23:59.897 ================================ 00:23:59.897 Supported: No 00:23:59.897 00:23:59.897 Admin Command Set Attributes 00:23:59.897 ============================ 00:23:59.897 Security Send/Receive: Not Supported 00:23:59.897 Format NVM: Not Supported 00:23:59.897 Firmware Activate/Download: Not Supported 00:23:59.897 Namespace Management: Not Supported 00:23:59.897 Device Self-Test: Not Supported 00:23:59.897 Directives: Not Supported 00:23:59.897 NVMe-MI: Not Supported 00:23:59.897 Virtualization Management: Not Supported 00:23:59.897 Doorbell Buffer Config: Not Supported 00:23:59.897 Get LBA Status Capability: Not Supported 00:23:59.897 Command & Feature Lockdown Capability: Not Supported 00:23:59.897 Abort Command Limit: 1 00:23:59.897 Async 
Event Request Limit: 4 00:23:59.897 Number of Firmware Slots: N/A 00:23:59.897 Firmware Slot 1 Read-Only: N/A 00:23:59.897 Firmware Activation Without Reset: N/A 00:23:59.897 Multiple Update Detection Support: N/A 00:23:59.897 Firmware Update Granularity: No Information Provided 00:23:59.897 Per-Namespace SMART Log: No 00:23:59.897 Asymmetric Namespace Access Log Page: Not Supported 00:23:59.897 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:59.897 Command Effects Log Page: Not Supported 00:23:59.897 Get Log Page Extended Data: Supported 00:23:59.897 Telemetry Log Pages: Not Supported 00:23:59.897 Persistent Event Log Pages: Not Supported 00:23:59.897 Supported Log Pages Log Page: May Support 00:23:59.897 Commands Supported & Effects Log Page: Not Supported 00:23:59.897 Feature Identifiers & Effects Log Page:May Support 00:23:59.897 NVMe-MI Commands & Effects Log Page: May Support 00:23:59.897 Data Area 4 for Telemetry Log: Not Supported 00:23:59.897 Error Log Page Entries Supported: 128 00:23:59.897 Keep Alive: Not Supported 00:23:59.897 00:23:59.897 NVM Command Set Attributes 00:23:59.897 ========================== 00:23:59.897 Submission Queue Entry Size 00:23:59.897 Max: 1 00:23:59.897 Min: 1 00:23:59.897 Completion Queue Entry Size 00:23:59.897 Max: 1 00:23:59.897 Min: 1 00:23:59.897 Number of Namespaces: 0 00:23:59.897 Compare Command: Not Supported 00:23:59.897 Write Uncorrectable Command: Not Supported 00:23:59.897 Dataset Management Command: Not Supported 00:23:59.897 Write Zeroes Command: Not Supported 00:23:59.897 Set Features Save Field: Not Supported 00:23:59.897 Reservations: Not Supported 00:23:59.897 Timestamp: Not Supported 00:23:59.897 Copy: Not Supported 00:23:59.897 Volatile Write Cache: Not Present 00:23:59.897 Atomic Write Unit (Normal): 1 00:23:59.897 Atomic Write Unit (PFail): 1 00:23:59.897 Atomic Compare & Write Unit: 1 00:23:59.897 Fused Compare & Write: Supported 00:23:59.897 Scatter-Gather List 00:23:59.897 SGL Command Set: Supported 00:23:59.897 SGL Keyed: Supported 00:23:59.897 SGL Bit Bucket Descriptor: Not Supported 00:23:59.897 SGL Metadata Pointer: Not Supported 00:23:59.897 Oversized SGL: Not Supported 00:23:59.897 SGL Metadata Address: Not Supported 00:23:59.897 SGL Offset: Supported 00:23:59.897 Transport SGL Data Block: Not Supported 00:23:59.897 Replay Protected Memory Block: Not Supported 00:23:59.897 00:23:59.897 Firmware Slot Information 00:23:59.897 ========================= 00:23:59.897 Active slot: 0 00:23:59.897 00:23:59.897 00:23:59.897 Error Log 00:23:59.897 ========= 00:23:59.897 00:23:59.897 Active Namespaces 00:23:59.897 ================= 00:23:59.897 Discovery Log Page 00:23:59.897 ================== 00:23:59.897 Generation Counter: 2 00:23:59.897 Number of Records: 2 00:23:59.897 Record Format: 0 00:23:59.897 00:23:59.897 Discovery Log Entry 0 00:23:59.897 ---------------------- 00:23:59.897 Transport Type: 3 (TCP) 00:23:59.897 Address Family: 1 (IPv4) 00:23:59.898 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:59.898 Entry Flags: 00:23:59.898 Duplicate Returned Information: 1 00:23:59.898 Explicit Persistent Connection Support for Discovery: 1 00:23:59.898 Transport Requirements: 00:23:59.898 Secure Channel: Not Required 00:23:59.898 Port ID: 0 (0x0000) 00:23:59.898 Controller ID: 65535 (0xffff) 00:23:59.898 Admin Max SQ Size: 128 00:23:59.898 Transport Service Identifier: 4420 00:23:59.898 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:59.898 Transport Address: 10.0.0.2 00:23:59.898 
Discovery Log Entry 1 00:23:59.898 ---------------------- 00:23:59.898 Transport Type: 3 (TCP) 00:23:59.898 Address Family: 1 (IPv4) 00:23:59.898 Subsystem Type: 2 (NVM Subsystem) 00:23:59.898 Entry Flags: 00:23:59.898 Duplicate Returned Information: 0 00:23:59.898 Explicit Persistent Connection Support for Discovery: 0 00:23:59.898 Transport Requirements: 00:23:59.898 Secure Channel: Not Required 00:23:59.898 Port ID: 0 (0x0000) 00:23:59.898 Controller ID: 65535 (0xffff) 00:23:59.898 Admin Max SQ Size: 128 00:23:59.898 Transport Service Identifier: 4420 00:23:59.898 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:59.898 Transport Address: 10.0.0.2 [2024-07-15 21:00:03.761430] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:59.898 [2024-07-15 21:00:03.761440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a94e40) on tqpair=0x1a11ec0 00:23:59.898 [2024-07-15 21:00:03.761446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.898 [2024-07-15 21:00:03.761452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a94fc0) on tqpair=0x1a11ec0 00:23:59.898 [2024-07-15 21:00:03.761456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.898 [2024-07-15 21:00:03.761461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95140) on tqpair=0x1a11ec0 00:23:59.898 [2024-07-15 21:00:03.761466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.898 [2024-07-15 21:00:03.761470] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a952c0) on tqpair=0x1a11ec0 00:23:59.898 [2024-07-15 21:00:03.761476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.898 [2024-07-15 21:00:03.761486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.761490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.761494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a11ec0) 00:23:59.898 [2024-07-15 21:00:03.761501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.898 [2024-07-15 21:00:03.761515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a952c0, cid 3, qid 0 00:23:59.898 [2024-07-15 21:00:03.761623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.898 [2024-07-15 21:00:03.761630] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.898 [2024-07-15 21:00:03.761633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.761637] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a952c0) on tqpair=0x1a11ec0 00:23:59.898 [2024-07-15 21:00:03.761644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.761647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.761651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a11ec0) 00:23:59.898 [2024-07-15 
21:00:03.761657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.898 [2024-07-15 21:00:03.761670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a952c0, cid 3, qid 0 00:23:59.898 [2024-07-15 21:00:03.761903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.898 [2024-07-15 21:00:03.761910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.898 [2024-07-15 21:00:03.761913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.761917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a952c0) on tqpair=0x1a11ec0 00:23:59.898 [2024-07-15 21:00:03.761922] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:59.898 [2024-07-15 21:00:03.761926] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:59.898 [2024-07-15 21:00:03.761935] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.761939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.761942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a11ec0) 00:23:59.898 [2024-07-15 21:00:03.761949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.898 [2024-07-15 21:00:03.761959] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a952c0, cid 3, qid 0 00:23:59.898 [2024-07-15 21:00:03.762160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.898 [2024-07-15 21:00:03.762167] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.898 [2024-07-15 21:00:03.762170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.762174] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a952c0) on tqpair=0x1a11ec0 00:23:59.898 [2024-07-15 21:00:03.762184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.762188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.762191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a11ec0) 00:23:59.898 [2024-07-15 21:00:03.762198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.898 [2024-07-15 21:00:03.762208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a952c0, cid 3, qid 0 00:23:59.898 [2024-07-15 21:00:03.762439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.898 [2024-07-15 21:00:03.762448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.898 [2024-07-15 21:00:03.762452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.762456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a952c0) on tqpair=0x1a11ec0 00:23:59.898 [2024-07-15 21:00:03.762465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.762469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.762472] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a11ec0) 00:23:59.898 [2024-07-15 21:00:03.762479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.898 [2024-07-15 21:00:03.762489] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a952c0, cid 3, qid 0 00:23:59.898 [2024-07-15 21:00:03.762701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.898 [2024-07-15 21:00:03.762707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.898 [2024-07-15 21:00:03.762711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.762714] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a952c0) on tqpair=0x1a11ec0 00:23:59.898 [2024-07-15 21:00:03.762724] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.762728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.762731] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a11ec0) 00:23:59.898 [2024-07-15 21:00:03.762738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.898 [2024-07-15 21:00:03.762747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a952c0, cid 3, qid 0 00:23:59.898 [2024-07-15 21:00:03.762965] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.898 [2024-07-15 21:00:03.762971] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.898 [2024-07-15 21:00:03.762975] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.762978] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a952c0) on tqpair=0x1a11ec0 00:23:59.898 [2024-07-15 21:00:03.762988] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.762991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.762995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a11ec0) 00:23:59.898 [2024-07-15 21:00:03.763001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.898 [2024-07-15 21:00:03.763011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a952c0, cid 3, qid 0 00:23:59.898 [2024-07-15 21:00:03.767131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.898 [2024-07-15 21:00:03.767139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.898 [2024-07-15 21:00:03.767143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.898 [2024-07-15 21:00:03.767147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a952c0) on tqpair=0x1a11ec0 00:23:59.898 [2024-07-15 21:00:03.767156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.899 [2024-07-15 21:00:03.767160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.899 [2024-07-15 21:00:03.767164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a11ec0) 00:23:59.899 [2024-07-15 21:00:03.767170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-15 21:00:03.767182] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a952c0, cid 3, qid 0 00:23:59.899 [2024-07-15 21:00:03.767391] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.899 [2024-07-15 21:00:03.767397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.899 [2024-07-15 21:00:03.767403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.899 [2024-07-15 21:00:03.767407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a952c0) on tqpair=0x1a11ec0 00:23:59.899 [2024-07-15 21:00:03.767415] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:23:59.899 00:24:00.176 21:00:03 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:00.176 [2024-07-15 21:00:03.806851] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:24:00.176 [2024-07-15 21:00:03.806916] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1678552 ] 00:24:00.176 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.176 [2024-07-15 21:00:03.840656] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:00.176 [2024-07-15 21:00:03.840700] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:00.176 [2024-07-15 21:00:03.840705] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:00.176 [2024-07-15 21:00:03.840715] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:00.176 [2024-07-15 21:00:03.840721] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:00.176 [2024-07-15 21:00:03.841055] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:00.176 [2024-07-15 21:00:03.841079] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1dfaec0 0 00:24:00.176 [2024-07-15 21:00:03.847133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:00.176 [2024-07-15 21:00:03.847145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:00.176 [2024-07-15 21:00:03.847149] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:00.176 [2024-07-15 21:00:03.847152] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:00.176 [2024-07-15 21:00:03.847186] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.847191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.847195] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfaec0) 00:24:00.176 [2024-07-15 21:00:03.847206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:00.176 [2024-07-15 21:00:03.847222] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7de40, cid 0, qid 0 00:24:00.176 [2024-07-15 21:00:03.855132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.176 [2024-07-15 21:00:03.855141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.176 [2024-07-15 21:00:03.855144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.855149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7de40) on tqpair=0x1dfaec0 00:24:00.176 [2024-07-15 21:00:03.855157] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:00.176 [2024-07-15 21:00:03.855163] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:00.176 [2024-07-15 21:00:03.855168] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:00.176 [2024-07-15 21:00:03.855180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.855187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.855190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfaec0) 00:24:00.176 [2024-07-15 21:00:03.855198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.176 [2024-07-15 21:00:03.855211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7de40, cid 0, qid 0 00:24:00.176 [2024-07-15 21:00:03.855304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.176 [2024-07-15 21:00:03.855311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.176 [2024-07-15 21:00:03.855314] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.855318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7de40) on tqpair=0x1dfaec0 00:24:00.176 [2024-07-15 21:00:03.855323] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:00.176 [2024-07-15 21:00:03.855330] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:00.176 [2024-07-15 21:00:03.855337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.855340] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.855344] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfaec0) 00:24:00.176 [2024-07-15 21:00:03.855351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.176 [2024-07-15 21:00:03.855362] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7de40, cid 0, qid 0 00:24:00.176 [2024-07-15 21:00:03.855446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.176 [2024-07-15 21:00:03.855452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.176 [2024-07-15 21:00:03.855456] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.855459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7de40) on 
tqpair=0x1dfaec0 00:24:00.176 [2024-07-15 21:00:03.855464] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:00.176 [2024-07-15 21:00:03.855472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:00.176 [2024-07-15 21:00:03.855478] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.855482] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.855486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfaec0) 00:24:00.176 [2024-07-15 21:00:03.855492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.176 [2024-07-15 21:00:03.855502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7de40, cid 0, qid 0 00:24:00.176 [2024-07-15 21:00:03.859128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.176 [2024-07-15 21:00:03.859136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.176 [2024-07-15 21:00:03.859140] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7de40) on tqpair=0x1dfaec0 00:24:00.176 [2024-07-15 21:00:03.859148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:00.176 [2024-07-15 21:00:03.859158] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfaec0) 00:24:00.176 [2024-07-15 21:00:03.859172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.176 [2024-07-15 21:00:03.859187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7de40, cid 0, qid 0 00:24:00.176 [2024-07-15 21:00:03.859272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.176 [2024-07-15 21:00:03.859278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.176 [2024-07-15 21:00:03.859281] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7de40) on tqpair=0x1dfaec0 00:24:00.176 [2024-07-15 21:00:03.859290] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:00.176 [2024-07-15 21:00:03.859294] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:00.176 [2024-07-15 21:00:03.859301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:00.176 [2024-07-15 21:00:03.859406] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:00.176 [2024-07-15 21:00:03.859410] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:00.176 [2024-07-15 21:00:03.859418] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfaec0) 00:24:00.176 [2024-07-15 21:00:03.859431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.176 [2024-07-15 21:00:03.859442] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7de40, cid 0, qid 0 00:24:00.176 [2024-07-15 21:00:03.859523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.176 [2024-07-15 21:00:03.859530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.176 [2024-07-15 21:00:03.859533] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859537] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7de40) on tqpair=0x1dfaec0 00:24:00.176 [2024-07-15 21:00:03.859542] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:00.176 [2024-07-15 21:00:03.859550] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfaec0) 00:24:00.176 [2024-07-15 21:00:03.859564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.176 [2024-07-15 21:00:03.859574] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7de40, cid 0, qid 0 00:24:00.176 [2024-07-15 21:00:03.859658] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.176 [2024-07-15 21:00:03.859664] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.176 [2024-07-15 21:00:03.859668] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859671] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7de40) on tqpair=0x1dfaec0 00:24:00.176 [2024-07-15 21:00:03.859675] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:00.176 [2024-07-15 21:00:03.859680] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:00.176 [2024-07-15 21:00:03.859687] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:00.176 [2024-07-15 21:00:03.859697] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:00.176 [2024-07-15 21:00:03.859706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfaec0) 00:24:00.176 [2024-07-15 21:00:03.859716] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.176 [2024-07-15 21:00:03.859726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7de40, cid 0, qid 0 00:24:00.176 [2024-07-15 21:00:03.859838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:00.176 [2024-07-15 21:00:03.859844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:00.176 [2024-07-15 21:00:03.859848] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859851] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dfaec0): datao=0, datal=4096, cccid=0 00:24:00.176 [2024-07-15 21:00:03.859856] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e7de40) on tqpair(0x1dfaec0): expected_datao=0, payload_size=4096 00:24:00.176 [2024-07-15 21:00:03.859860] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859867] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859871] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.176 [2024-07-15 21:00:03.859924] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.176 [2024-07-15 21:00:03.859928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859932] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7de40) on tqpair=0x1dfaec0 00:24:00.176 [2024-07-15 21:00:03.859938] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:00.176 [2024-07-15 21:00:03.859945] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:00.176 [2024-07-15 21:00:03.859950] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:00.176 [2024-07-15 21:00:03.859954] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:00.176 [2024-07-15 21:00:03.859958] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:00.176 [2024-07-15 21:00:03.859963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:00.176 [2024-07-15 21:00:03.859971] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:00.176 [2024-07-15 21:00:03.859977] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859981] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.176 [2024-07-15 21:00:03.859984] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfaec0) 00:24:00.176 [2024-07-15 21:00:03.859991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:00.176 [2024-07-15 21:00:03.860002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7de40, cid 0, qid 0 00:24:00.176 [2024-07-15 21:00:03.860090] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.176 [2024-07-15 21:00:03.860096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.177 [2024-07-15 21:00:03.860100] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7de40) on tqpair=0x1dfaec0 00:24:00.177 [2024-07-15 21:00:03.860110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860120] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.860132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.177 [2024-07-15 21:00:03.860139] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860142] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860146] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.860152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.177 [2024-07-15 21:00:03.860157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.860170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.177 [2024-07-15 21:00:03.860176] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860183] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.860189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.177 [2024-07-15 21:00:03.860193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.860203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.860209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860213] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.860220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.177 [2024-07-15 21:00:03.860232] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7de40, cid 0, qid 0 00:24:00.177 [2024-07-15 21:00:03.860237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1e7dfc0, cid 1, qid 0 00:24:00.177 [2024-07-15 21:00:03.860242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e140, cid 2, qid 0 00:24:00.177 [2024-07-15 21:00:03.860246] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e2c0, cid 3, qid 0 00:24:00.177 [2024-07-15 21:00:03.860251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e440, cid 4, qid 0 00:24:00.177 [2024-07-15 21:00:03.860364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.177 [2024-07-15 21:00:03.860371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.177 [2024-07-15 21:00:03.860374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e440) on tqpair=0x1dfaec0 00:24:00.177 [2024-07-15 21:00:03.860383] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:00.177 [2024-07-15 21:00:03.860387] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.860395] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.860401] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.860408] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.860422] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:00.177 [2024-07-15 21:00:03.860432] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e440, cid 4, qid 0 00:24:00.177 [2024-07-15 21:00:03.860517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.177 [2024-07-15 21:00:03.860523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.177 [2024-07-15 21:00:03.860526] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860530] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e440) on tqpair=0x1dfaec0 00:24:00.177 [2024-07-15 21:00:03.860594] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.860603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.860610] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860614] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.860620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:00.177 [2024-07-15 21:00:03.860631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e440, cid 4, qid 0 00:24:00.177 [2024-07-15 21:00:03.860727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:00.177 [2024-07-15 21:00:03.860734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:00.177 [2024-07-15 21:00:03.860737] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860741] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dfaec0): datao=0, datal=4096, cccid=4 00:24:00.177 [2024-07-15 21:00:03.860745] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e7e440) on tqpair(0x1dfaec0): expected_datao=0, payload_size=4096 00:24:00.177 [2024-07-15 21:00:03.860749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860756] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860760] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860833] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.177 [2024-07-15 21:00:03.860839] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.177 [2024-07-15 21:00:03.860843] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860846] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e440) on tqpair=0x1dfaec0 00:24:00.177 [2024-07-15 21:00:03.860854] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:00.177 [2024-07-15 21:00:03.860867] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.860875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.860882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.860886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.860892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.177 [2024-07-15 21:00:03.860905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e440, cid 4, qid 0 00:24:00.177 [2024-07-15 21:00:03.860999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:00.177 [2024-07-15 21:00:03.861006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:00.177 [2024-07-15 21:00:03.861009] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861013] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dfaec0): datao=0, datal=4096, cccid=4 00:24:00.177 [2024-07-15 21:00:03.861017] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e7e440) on tqpair(0x1dfaec0): expected_datao=0, payload_size=4096 00:24:00.177 [2024-07-15 21:00:03.861021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861028] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861031] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.177 [2024-07-15 21:00:03.861138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.177 [2024-07-15 21:00:03.861142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e440) on tqpair=0x1dfaec0 00:24:00.177 [2024-07-15 21:00:03.861157] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.861166] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.861173] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.861183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.177 [2024-07-15 21:00:03.861194] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e440, cid 4, qid 0 00:24:00.177 [2024-07-15 21:00:03.861286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:00.177 [2024-07-15 21:00:03.861292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:00.177 [2024-07-15 21:00:03.861295] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861299] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dfaec0): datao=0, datal=4096, cccid=4 00:24:00.177 [2024-07-15 21:00:03.861303] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e7e440) on tqpair(0x1dfaec0): expected_datao=0, payload_size=4096 00:24:00.177 [2024-07-15 21:00:03.861307] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861314] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861317] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861391] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.177 [2024-07-15 21:00:03.861397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.177 [2024-07-15 21:00:03.861401] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e440) on tqpair=0x1dfaec0 00:24:00.177 [2024-07-15 21:00:03.861412] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.861419] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.861429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.861436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.861441] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.861446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.861451] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:00.177 [2024-07-15 21:00:03.861455] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:00.177 [2024-07-15 21:00:03.861460] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:00.177 [2024-07-15 21:00:03.861474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861478] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.861485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.177 [2024-07-15 21:00:03.861492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.861505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:00.177 [2024-07-15 21:00:03.861518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e440, cid 4, qid 0 00:24:00.177 [2024-07-15 21:00:03.861523] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e5c0, cid 5, qid 0 00:24:00.177 [2024-07-15 21:00:03.861622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.177 [2024-07-15 21:00:03.861629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.177 [2024-07-15 21:00:03.861632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e440) on tqpair=0x1dfaec0 00:24:00.177 [2024-07-15 21:00:03.861642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.177 [2024-07-15 21:00:03.861648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.177 [2024-07-15 21:00:03.861651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861655] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e5c0) on tqpair=0x1dfaec0 00:24:00.177 [2024-07-15 21:00:03.861664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861668] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.861674] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.177 
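At this point in the trace the controller for nqn.2016-06.io.spdk:cnode1 has reached the ready state, and the remaining records are the GET FEATURES / GET LOG PAGE queries that the identify tool issues for -L all. The admin-queue state machine shown above (FABRIC CONNECT, read VS/CAP, CC.EN=1, IDENTIFY, AER configuration, keep-alive, queue count, namespace identify) is internal to the SPDK host library; a host application normally drives it through a single connect call. The following is a minimal, hypothetical sketch of that public-API path, assuming the SPDK C API in this tree (spdk_nvme_transport_id_parse(), spdk_nvme_connect(), spdk_nvme_ctrlr_get_data(), spdk_nvme_detach()) and a standalone program rather than the test's own identify binary; the transport string and target address are taken from the log above.

/*
 * Hypothetical sketch: connect to the TCP target from this run through the
 * public SPDK host API and print a few identify-controller fields.
 * This is not the test's code; it only illustrates the sequence the
 * debug records above trace internally.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch"; /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same transport string the test passed via -r (see the log above). */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() runs the admin-queue init sequence traced above:
	 * FABRIC CONNECT, read VS/CAP, enable the controller, IDENTIFY,
	 * configure AER, keep-alive, number of queues, namespace identify. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.traddr);
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model Number: %.40s\n", cdata->mn);
	printf("Serial Number: %.20s\n", cdata->sn);
	printf("Firmware Version: %.8s\n", cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Built against the SPDK headers in this workspace, such a program would report the same Model Number, Serial Number, and Firmware Version fields that appear in the identify output printed further down in this log.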
[2024-07-15 21:00:03.861684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e5c0, cid 5, qid 0 00:24:00.177 [2024-07-15 21:00:03.861775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.177 [2024-07-15 21:00:03.861781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.177 [2024-07-15 21:00:03.861784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861788] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e5c0) on tqpair=0x1dfaec0 00:24:00.177 [2024-07-15 21:00:03.861796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.861806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.177 [2024-07-15 21:00:03.861828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e5c0, cid 5, qid 0 00:24:00.177 [2024-07-15 21:00:03.861913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.177 [2024-07-15 21:00:03.861919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.177 [2024-07-15 21:00:03.861922] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e5c0) on tqpair=0x1dfaec0 00:24:00.177 [2024-07-15 21:00:03.861935] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.861938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.861945] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.177 [2024-07-15 21:00:03.861954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e5c0, cid 5, qid 0 00:24:00.177 [2024-07-15 21:00:03.862040] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.177 [2024-07-15 21:00:03.862046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.177 [2024-07-15 21:00:03.862049] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.862053] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e5c0) on tqpair=0x1dfaec0 00:24:00.177 [2024-07-15 21:00:03.862067] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.862071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.862077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.177 [2024-07-15 21:00:03.862084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.177 [2024-07-15 21:00:03.862088] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dfaec0) 00:24:00.177 [2024-07-15 21:00:03.862094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:00.178 [2024-07-15 21:00:03.862101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862105] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1dfaec0) 00:24:00.178 [2024-07-15 21:00:03.862111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.178 [2024-07-15 21:00:03.862118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862129] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1dfaec0) 00:24:00.178 [2024-07-15 21:00:03.862135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.178 [2024-07-15 21:00:03.862147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e5c0, cid 5, qid 0 00:24:00.178 [2024-07-15 21:00:03.862152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e440, cid 4, qid 0 00:24:00.178 [2024-07-15 21:00:03.862156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e740, cid 6, qid 0 00:24:00.178 [2024-07-15 21:00:03.862161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e8c0, cid 7, qid 0 00:24:00.178 [2024-07-15 21:00:03.862306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:00.178 [2024-07-15 21:00:03.862312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:00.178 [2024-07-15 21:00:03.862316] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862319] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dfaec0): datao=0, datal=8192, cccid=5 00:24:00.178 [2024-07-15 21:00:03.862326] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e7e5c0) on tqpair(0x1dfaec0): expected_datao=0, payload_size=8192 00:24:00.178 [2024-07-15 21:00:03.862330] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862592] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862595] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:00.178 [2024-07-15 21:00:03.862607] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:00.178 [2024-07-15 21:00:03.862610] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862613] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dfaec0): datao=0, datal=512, cccid=4 00:24:00.178 [2024-07-15 21:00:03.862618] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e7e440) on tqpair(0x1dfaec0): expected_datao=0, payload_size=512 00:24:00.178 [2024-07-15 21:00:03.862622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862628] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862632] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:00.178 [2024-07-15 21:00:03.862643] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:00.178 [2024-07-15 21:00:03.862646] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862649] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dfaec0): datao=0, datal=512, cccid=6 00:24:00.178 [2024-07-15 21:00:03.862654] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e7e740) on tqpair(0x1dfaec0): expected_datao=0, payload_size=512 00:24:00.178 [2024-07-15 21:00:03.862658] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862664] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862667] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:00.178 [2024-07-15 21:00:03.862679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:00.178 [2024-07-15 21:00:03.862682] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862685] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dfaec0): datao=0, datal=4096, cccid=7 00:24:00.178 [2024-07-15 21:00:03.862689] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e7e8c0) on tqpair(0x1dfaec0): expected_datao=0, payload_size=4096 00:24:00.178 [2024-07-15 21:00:03.862694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862700] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862704] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.178 [2024-07-15 21:00:03.862754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.178 [2024-07-15 21:00:03.862757] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e5c0) on tqpair=0x1dfaec0 00:24:00.178 [2024-07-15 21:00:03.862773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.178 [2024-07-15 21:00:03.862779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.178 [2024-07-15 21:00:03.862782] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e440) on tqpair=0x1dfaec0 00:24:00.178 [2024-07-15 21:00:03.862796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.178 [2024-07-15 21:00:03.862802] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.178 [2024-07-15 21:00:03.862807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e740) on tqpair=0x1dfaec0 00:24:00.178 [2024-07-15 21:00:03.862817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.178 [2024-07-15 21:00:03.862823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.178 [2024-07-15 21:00:03.862826] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.178 [2024-07-15 21:00:03.862830] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e8c0) on tqpair=0x1dfaec0 00:24:00.178 ===================================================== 00:24:00.178 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:00.178 ===================================================== 00:24:00.178 Controller Capabilities/Features 00:24:00.178 ================================ 00:24:00.178 Vendor ID: 8086 00:24:00.178 Subsystem Vendor ID: 8086 00:24:00.178 Serial Number: SPDK00000000000001 00:24:00.178 Model Number: SPDK bdev Controller 00:24:00.178 Firmware Version: 24.09 00:24:00.178 Recommended Arb Burst: 6 00:24:00.178 IEEE OUI Identifier: e4 d2 5c 00:24:00.178 Multi-path I/O 00:24:00.178 May have multiple subsystem ports: Yes 00:24:00.178 May have multiple controllers: Yes 00:24:00.178 Associated with SR-IOV VF: No 00:24:00.178 Max Data Transfer Size: 131072 00:24:00.178 Max Number of Namespaces: 32 00:24:00.178 Max Number of I/O Queues: 127 00:24:00.178 NVMe Specification Version (VS): 1.3 00:24:00.178 NVMe Specification Version (Identify): 1.3 00:24:00.178 Maximum Queue Entries: 128 00:24:00.178 Contiguous Queues Required: Yes 00:24:00.178 Arbitration Mechanisms Supported 00:24:00.178 Weighted Round Robin: Not Supported 00:24:00.178 Vendor Specific: Not Supported 00:24:00.178 Reset Timeout: 15000 ms 00:24:00.178 Doorbell Stride: 4 bytes 00:24:00.178 NVM Subsystem Reset: Not Supported 00:24:00.178 Command Sets Supported 00:24:00.178 NVM Command Set: Supported 00:24:00.178 Boot Partition: Not Supported 00:24:00.178 Memory Page Size Minimum: 4096 bytes 00:24:00.178 Memory Page Size Maximum: 4096 bytes 00:24:00.178 Persistent Memory Region: Not Supported 00:24:00.178 Optional Asynchronous Events Supported 00:24:00.178 Namespace Attribute Notices: Supported 00:24:00.178 Firmware Activation Notices: Not Supported 00:24:00.178 ANA Change Notices: Not Supported 00:24:00.178 PLE Aggregate Log Change Notices: Not Supported 00:24:00.178 LBA Status Info Alert Notices: Not Supported 00:24:00.178 EGE Aggregate Log Change Notices: Not Supported 00:24:00.178 Normal NVM Subsystem Shutdown event: Not Supported 00:24:00.178 Zone Descriptor Change Notices: Not Supported 00:24:00.178 Discovery Log Change Notices: Not Supported 00:24:00.178 Controller Attributes 00:24:00.178 128-bit Host Identifier: Supported 00:24:00.178 Non-Operational Permissive Mode: Not Supported 00:24:00.178 NVM Sets: Not Supported 00:24:00.178 Read Recovery Levels: Not Supported 00:24:00.178 Endurance Groups: Not Supported 00:24:00.178 Predictable Latency Mode: Not Supported 00:24:00.178 Traffic Based Keep ALive: Not Supported 00:24:00.178 Namespace Granularity: Not Supported 00:24:00.178 SQ Associations: Not Supported 00:24:00.178 UUID List: Not Supported 00:24:00.178 Multi-Domain Subsystem: Not Supported 00:24:00.178 Fixed Capacity Management: Not Supported 00:24:00.178 Variable Capacity Management: Not Supported 00:24:00.178 Delete Endurance Group: Not Supported 00:24:00.178 Delete NVM Set: Not Supported 00:24:00.178 Extended LBA Formats Supported: Not Supported 00:24:00.178 Flexible Data Placement Supported: Not Supported 00:24:00.178 00:24:00.178 Controller Memory Buffer Support 00:24:00.178 ================================ 00:24:00.178 Supported: No 00:24:00.178 00:24:00.178 Persistent Memory Region Support 00:24:00.178 ================================ 00:24:00.178 Supported: No 00:24:00.178 00:24:00.178 Admin Command Set Attributes 00:24:00.178 ============================ 00:24:00.178 Security 
Send/Receive: Not Supported 00:24:00.178 Format NVM: Not Supported 00:24:00.178 Firmware Activate/Download: Not Supported 00:24:00.178 Namespace Management: Not Supported 00:24:00.178 Device Self-Test: Not Supported 00:24:00.178 Directives: Not Supported 00:24:00.178 NVMe-MI: Not Supported 00:24:00.178 Virtualization Management: Not Supported 00:24:00.178 Doorbell Buffer Config: Not Supported 00:24:00.178 Get LBA Status Capability: Not Supported 00:24:00.178 Command & Feature Lockdown Capability: Not Supported 00:24:00.178 Abort Command Limit: 4 00:24:00.178 Async Event Request Limit: 4 00:24:00.178 Number of Firmware Slots: N/A 00:24:00.178 Firmware Slot 1 Read-Only: N/A 00:24:00.178 Firmware Activation Without Reset: N/A 00:24:00.178 Multiple Update Detection Support: N/A 00:24:00.178 Firmware Update Granularity: No Information Provided 00:24:00.178 Per-Namespace SMART Log: No 00:24:00.178 Asymmetric Namespace Access Log Page: Not Supported 00:24:00.178 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:00.178 Command Effects Log Page: Supported 00:24:00.178 Get Log Page Extended Data: Supported 00:24:00.178 Telemetry Log Pages: Not Supported 00:24:00.178 Persistent Event Log Pages: Not Supported 00:24:00.178 Supported Log Pages Log Page: May Support 00:24:00.178 Commands Supported & Effects Log Page: Not Supported 00:24:00.178 Feature Identifiers & Effects Log Page:May Support 00:24:00.178 NVMe-MI Commands & Effects Log Page: May Support 00:24:00.178 Data Area 4 for Telemetry Log: Not Supported 00:24:00.178 Error Log Page Entries Supported: 128 00:24:00.178 Keep Alive: Supported 00:24:00.178 Keep Alive Granularity: 10000 ms 00:24:00.178 00:24:00.178 NVM Command Set Attributes 00:24:00.178 ========================== 00:24:00.178 Submission Queue Entry Size 00:24:00.178 Max: 64 00:24:00.178 Min: 64 00:24:00.178 Completion Queue Entry Size 00:24:00.178 Max: 16 00:24:00.178 Min: 16 00:24:00.178 Number of Namespaces: 32 00:24:00.178 Compare Command: Supported 00:24:00.178 Write Uncorrectable Command: Not Supported 00:24:00.178 Dataset Management Command: Supported 00:24:00.178 Write Zeroes Command: Supported 00:24:00.178 Set Features Save Field: Not Supported 00:24:00.178 Reservations: Supported 00:24:00.178 Timestamp: Not Supported 00:24:00.178 Copy: Supported 00:24:00.178 Volatile Write Cache: Present 00:24:00.178 Atomic Write Unit (Normal): 1 00:24:00.178 Atomic Write Unit (PFail): 1 00:24:00.178 Atomic Compare & Write Unit: 1 00:24:00.178 Fused Compare & Write: Supported 00:24:00.178 Scatter-Gather List 00:24:00.178 SGL Command Set: Supported 00:24:00.178 SGL Keyed: Supported 00:24:00.178 SGL Bit Bucket Descriptor: Not Supported 00:24:00.178 SGL Metadata Pointer: Not Supported 00:24:00.178 Oversized SGL: Not Supported 00:24:00.178 SGL Metadata Address: Not Supported 00:24:00.178 SGL Offset: Supported 00:24:00.178 Transport SGL Data Block: Not Supported 00:24:00.178 Replay Protected Memory Block: Not Supported 00:24:00.178 00:24:00.178 Firmware Slot Information 00:24:00.178 ========================= 00:24:00.178 Active slot: 1 00:24:00.178 Slot 1 Firmware Revision: 24.09 00:24:00.178 00:24:00.178 00:24:00.178 Commands Supported and Effects 00:24:00.178 ============================== 00:24:00.178 Admin Commands 00:24:00.178 -------------- 00:24:00.178 Get Log Page (02h): Supported 00:24:00.178 Identify (06h): Supported 00:24:00.178 Abort (08h): Supported 00:24:00.178 Set Features (09h): Supported 00:24:00.178 Get Features (0Ah): Supported 00:24:00.178 Asynchronous Event Request (0Ch): 
Supported 00:24:00.178 Keep Alive (18h): Supported 00:24:00.178 I/O Commands 00:24:00.178 ------------ 00:24:00.178 Flush (00h): Supported LBA-Change 00:24:00.178 Write (01h): Supported LBA-Change 00:24:00.178 Read (02h): Supported 00:24:00.178 Compare (05h): Supported 00:24:00.178 Write Zeroes (08h): Supported LBA-Change 00:24:00.178 Dataset Management (09h): Supported LBA-Change 00:24:00.178 Copy (19h): Supported LBA-Change 00:24:00.178 00:24:00.178 Error Log 00:24:00.178 ========= 00:24:00.178 00:24:00.178 Arbitration 00:24:00.178 =========== 00:24:00.178 Arbitration Burst: 1 00:24:00.178 00:24:00.178 Power Management 00:24:00.178 ================ 00:24:00.178 Number of Power States: 1 00:24:00.178 Current Power State: Power State #0 00:24:00.178 Power State #0: 00:24:00.178 Max Power: 0.00 W 00:24:00.178 Non-Operational State: Operational 00:24:00.178 Entry Latency: Not Reported 00:24:00.178 Exit Latency: Not Reported 00:24:00.178 Relative Read Throughput: 0 00:24:00.178 Relative Read Latency: 0 00:24:00.178 Relative Write Throughput: 0 00:24:00.178 Relative Write Latency: 0 00:24:00.178 Idle Power: Not Reported 00:24:00.178 Active Power: Not Reported 00:24:00.178 Non-Operational Permissive Mode: Not Supported 00:24:00.178 00:24:00.178 Health Information 00:24:00.178 ================== 00:24:00.178 Critical Warnings: 00:24:00.178 Available Spare Space: OK 00:24:00.178 Temperature: OK 00:24:00.178 Device Reliability: OK 00:24:00.178 Read Only: No 00:24:00.178 Volatile Memory Backup: OK 00:24:00.178 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:00.179 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:00.179 Available Spare: 0% 00:24:00.179 Available Spare Threshold: 0% 00:24:00.179 Life Percentage Used:[2024-07-15 21:00:03.862928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.179 [2024-07-15 21:00:03.862933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1dfaec0) 00:24:00.179 [2024-07-15 21:00:03.862940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.179 [2024-07-15 21:00:03.862953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e8c0, cid 7, qid 0 00:24:00.179 [2024-07-15 21:00:03.863045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.179 [2024-07-15 21:00:03.863051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.179 [2024-07-15 21:00:03.863055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.179 [2024-07-15 21:00:03.863059] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e8c0) on tqpair=0x1dfaec0 00:24:00.179 [2024-07-15 21:00:03.863089] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:00.179 [2024-07-15 21:00:03.863098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7de40) on tqpair=0x1dfaec0 00:24:00.179 [2024-07-15 21:00:03.863104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.179 [2024-07-15 21:00:03.863109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7dfc0) on tqpair=0x1dfaec0 00:24:00.179 [2024-07-15 21:00:03.863114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.179 [2024-07-15 
21:00:03.863119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e140) on tqpair=0x1dfaec0 00:24:00.179 [2024-07-15 21:00:03.863129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.179 [2024-07-15 21:00:03.863134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e2c0) on tqpair=0x1dfaec0 00:24:00.179 [2024-07-15 21:00:03.863139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.179 [2024-07-15 21:00:03.863146] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.179 [2024-07-15 21:00:03.863150] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.179 [2024-07-15 21:00:03.863154] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfaec0) 00:24:00.179 [2024-07-15 21:00:03.863160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.179 [2024-07-15 21:00:03.863173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e2c0, cid 3, qid 0 00:24:00.179 [2024-07-15 21:00:03.863261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.179 [2024-07-15 21:00:03.863267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.179 [2024-07-15 21:00:03.863271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.179 [2024-07-15 21:00:03.863274] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e2c0) on tqpair=0x1dfaec0 00:24:00.179 [2024-07-15 21:00:03.863281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.179 [2024-07-15 21:00:03.863285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.179 [2024-07-15 21:00:03.863288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfaec0) 00:24:00.179 [2024-07-15 21:00:03.863297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.179 [2024-07-15 21:00:03.863310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e2c0, cid 3, qid 0 00:24:00.179 [2024-07-15 21:00:03.863400] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.179 [2024-07-15 21:00:03.863407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.179 [2024-07-15 21:00:03.863410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.179 [2024-07-15 21:00:03.863414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e2c0) on tqpair=0x1dfaec0 00:24:00.179 [2024-07-15 21:00:03.863418] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:00.179 [2024-07-15 21:00:03.863423] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:00.179 [2024-07-15 21:00:03.863432] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.179 [2024-07-15 21:00:03.863436] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.179 [2024-07-15 21:00:03.863439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfaec0) 00:24:00.179 [2024-07-15 21:00:03.863446] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.180 [2024-07-15 21:00:03.865901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e2c0, cid 3, qid 0 00:24:00.180 [2024-07-15 21:00:03.865979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.180 [2024-07-15 21:00:03.865985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.180 [2024-07-15 21:00:03.865989] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.180 [2024-07-15 21:00:03.865992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e2c0) on tqpair=0x1dfaec0 00:24:00.180 [2024-07-15 21:00:03.866002] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.180 [2024-07-15 21:00:03.866006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.180 [2024-07-15 21:00:03.866009] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfaec0) 00:24:00.180 [2024-07-15 21:00:03.866016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.180 [2024-07-15 21:00:03.866025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e2c0, cid 3, qid 0 00:24:00.180 [2024-07-15 21:00:03.866105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.180 [2024-07-15 21:00:03.866111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.180 [2024-07-15 21:00:03.866114] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.180 [2024-07-15 21:00:03.866118] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e2c0) on tqpair=0x1dfaec0 00:24:00.180 [2024-07-15 21:00:03.870134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:00.180 [2024-07-15 21:00:03.870140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:00.180 [2024-07-15 21:00:03.870144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfaec0) 00:24:00.180 [2024-07-15 21:00:03.870151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.180 [2024-07-15 21:00:03.870163] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e7e2c0, cid 3, qid 0 00:24:00.180 [2024-07-15 21:00:03.870251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:00.180 [2024-07-15 21:00:03.870257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:00.180 [2024-07-15 21:00:03.870260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:00.180 [2024-07-15 21:00:03.870264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e7e2c0) on tqpair=0x1dfaec0 00:24:00.180 [2024-07-15 21:00:03.870271] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:00.180 0% 00:24:00.180 Data Units Read: 0 00:24:00.180 Data Units Written: 0 00:24:00.180 Host Read Commands: 0 00:24:00.180 Host Write Commands: 0 00:24:00.180 Controller Busy Time: 0 minutes 00:24:00.180 Power Cycles: 0 00:24:00.180 Power On Hours: 0 hours 00:24:00.180 Unsafe Shutdowns: 0 00:24:00.180 Unrecoverable Media Errors: 0 00:24:00.180 Lifetime Error Log Entries: 0 00:24:00.180 Warning Temperature Time: 0 minutes 00:24:00.180 Critical Temperature Time: 0 minutes 00:24:00.180 00:24:00.180 
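The Health Information block above is the controller's health log as reported over the fabric; every counter is zero because the namespaces behind this subsystem are freshly created emulated bdevs. As a hedged illustration only (the test itself drives SPDK's userspace identify example, not the kernel initiator), roughly the same fields could be read with nvme-cli against the listener recorded in this log; the device node /dev/nvme1 is a hypothetical name, check `nvme list` after connecting.
# Hedged sketch, not part of the test run recorded in this log.
# Assumes the target at 10.0.0.2:4420 (nqn.2016-06.io.spdk:cnode1) is still up
# and the kernel nvme-tcp module is available on the initiator host.
modprobe nvme-tcp
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme id-ctrl /dev/nvme1     # controller fields mirrored in the Identify dump above
nvme smart-log /dev/nvme1   # health/SMART counters (all zero for this target)
nvme disconnect -n nqn.2016-06.io.spdk:cnode1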
Number of Queues 00:24:00.180 ================ 00:24:00.180 Number of I/O Submission Queues: 127 00:24:00.180 Number of I/O Completion Queues: 127 00:24:00.180 00:24:00.180 Active Namespaces 00:24:00.180 ================= 00:24:00.180 Namespace ID:1 00:24:00.180 Error Recovery Timeout: Unlimited 00:24:00.180 Command Set Identifier: NVM (00h) 00:24:00.180 Deallocate: Supported 00:24:00.180 Deallocated/Unwritten Error: Not Supported 00:24:00.180 Deallocated Read Value: Unknown 00:24:00.180 Deallocate in Write Zeroes: Not Supported 00:24:00.180 Deallocated Guard Field: 0xFFFF 00:24:00.180 Flush: Supported 00:24:00.180 Reservation: Supported 00:24:00.180 Namespace Sharing Capabilities: Multiple Controllers 00:24:00.180 Size (in LBAs): 131072 (0GiB) 00:24:00.180 Capacity (in LBAs): 131072 (0GiB) 00:24:00.180 Utilization (in LBAs): 131072 (0GiB) 00:24:00.180 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:00.180 EUI64: ABCDEF0123456789 00:24:00.180 UUID: 78096914-a8e3-4de4-b2d0-0a3747d1a159 00:24:00.180 Thin Provisioning: Not Supported 00:24:00.180 Per-NS Atomic Units: Yes 00:24:00.180 Atomic Boundary Size (Normal): 0 00:24:00.180 Atomic Boundary Size (PFail): 0 00:24:00.180 Atomic Boundary Offset: 0 00:24:00.180 Maximum Single Source Range Length: 65535 00:24:00.180 Maximum Copy Length: 65535 00:24:00.180 Maximum Source Range Count: 1 00:24:00.180 NGUID/EUI64 Never Reused: No 00:24:00.180 Namespace Write Protected: No 00:24:00.180 Number of LBA Formats: 1 00:24:00.180 Current LBA Format: LBA Format #00 00:24:00.180 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:00.180 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:00.180 rmmod nvme_tcp 00:24:00.180 rmmod nvme_fabrics 00:24:00.180 rmmod nvme_keyring 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1678204 ']' 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1678204 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1678204 ']' 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1678204 00:24:00.180 21:00:03 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.180 21:00:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1678204 00:24:00.180 21:00:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:00.180 21:00:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:00.180 21:00:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1678204' 00:24:00.180 killing process with pid 1678204 00:24:00.180 21:00:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1678204 00:24:00.180 21:00:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1678204 00:24:00.440 21:00:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:00.440 21:00:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:00.440 21:00:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:00.440 21:00:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.440 21:00:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:00.440 21:00:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.440 21:00:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.440 21:00:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.352 21:00:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:02.352 00:24:02.352 real 0m10.828s 00:24:02.352 user 0m7.458s 00:24:02.352 sys 0m5.608s 00:24:02.352 21:00:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:02.352 21:00:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.352 ************************************ 00:24:02.352 END TEST nvmf_identify 00:24:02.352 ************************************ 00:24:02.613 21:00:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:02.613 21:00:06 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:02.613 21:00:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:02.613 21:00:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.613 21:00:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:02.613 ************************************ 00:24:02.613 START TEST nvmf_perf 00:24:02.613 ************************************ 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:02.613 * Looking for test storage... 
00:24:02.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.613 21:00:06 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:02.613 21:00:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:10.753 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:10.753 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:10.753 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:10.753 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:10.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:24:10.753 00:24:10.753 --- 10.0.0.2 ping statistics --- 00:24:10.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.753 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:10.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:24:10.753 00:24:10.753 --- 10.0.0.1 ping statistics --- 00:24:10.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.753 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1683008 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1683008 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1683008 ']' 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.753 21:00:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:10.753 [2024-07-15 21:00:13.581595] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:24:10.753 [2024-07-15 21:00:13.581658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.753 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.753 [2024-07-15 21:00:13.651763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:10.753 [2024-07-15 21:00:13.726111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.753 [2024-07-15 21:00:13.726152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:10.753 [2024-07-15 21:00:13.726159] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.753 [2024-07-15 21:00:13.726166] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.753 [2024-07-15 21:00:13.726171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.753 [2024-07-15 21:00:13.726351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.753 [2024-07-15 21:00:13.726470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.753 [2024-07-15 21:00:13.726629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.753 [2024-07-15 21:00:13.726630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.753 21:00:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.753 21:00:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:10.753 21:00:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.753 21:00:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:10.753 21:00:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:10.753 21:00:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.753 21:00:14 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:10.753 21:00:14 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:11.014 21:00:14 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:11.014 21:00:14 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:11.273 21:00:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:11.273 21:00:15 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:11.534 21:00:15 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:11.534 21:00:15 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:11.534 21:00:15 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:11.534 21:00:15 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:11.534 21:00:15 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:11.534 [2024-07-15 21:00:15.377464] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.534 21:00:15 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:11.794 21:00:15 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:11.794 21:00:15 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:12.056 21:00:15 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:12.056 21:00:15 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:12.056 21:00:15 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:12.318 [2024-07-15 21:00:16.064016] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.318 21:00:16 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:12.605 21:00:16 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:12.605 21:00:16 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:12.605 21:00:16 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:12.605 21:00:16 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:14.046 Initializing NVMe Controllers 00:24:14.046 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:14.046 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:14.046 Initialization complete. Launching workers. 00:24:14.046 ======================================================== 00:24:14.046 Latency(us) 00:24:14.046 Device Information : IOPS MiB/s Average min max 00:24:14.046 PCIE (0000:65:00.0) NSID 1 from core 0: 79067.48 308.86 403.99 71.94 6289.82 00:24:14.046 ======================================================== 00:24:14.046 Total : 79067.48 308.86 403.99 71.94 6289.82 00:24:14.046 00:24:14.046 21:00:17 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:14.046 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.430 Initializing NVMe Controllers 00:24:15.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:15.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:15.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:15.430 Initialization complete. Launching workers. 
00:24:15.430 ======================================================== 00:24:15.430 Latency(us) 00:24:15.430 Device Information : IOPS MiB/s Average min max 00:24:15.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 196.68 0.77 5175.66 125.79 45827.60 00:24:15.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 38.94 0.15 26500.92 7963.26 52856.79 00:24:15.430 ======================================================== 00:24:15.430 Total : 235.62 0.92 8699.75 125.79 52856.79 00:24:15.430 00:24:15.430 21:00:18 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:15.430 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.372 Initializing NVMe Controllers 00:24:16.372 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:16.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:16.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:16.372 Initialization complete. Launching workers. 00:24:16.372 ======================================================== 00:24:16.372 Latency(us) 00:24:16.372 Device Information : IOPS MiB/s Average min max 00:24:16.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9545.64 37.29 3361.21 565.83 8010.41 00:24:16.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3841.85 15.01 8380.11 5955.82 17225.50 00:24:16.372 ======================================================== 00:24:16.372 Total : 13387.49 52.29 4801.50 565.83 17225.50 00:24:16.372 00:24:16.372 21:00:20 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:16.372 21:00:20 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:16.373 21:00:20 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:16.633 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.175 Initializing NVMe Controllers 00:24:19.175 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:19.175 Controller IO queue size 128, less than required. 00:24:19.175 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:19.175 Controller IO queue size 128, less than required. 00:24:19.175 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:19.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:19.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:19.175 Initialization complete. Launching workers. 
00:24:19.175 ======================================================== 00:24:19.175 Latency(us) 00:24:19.175 Device Information : IOPS MiB/s Average min max 00:24:19.175 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 848.96 212.24 154752.93 80369.08 213567.05 00:24:19.175 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 587.13 146.78 229398.18 72016.83 323274.36 00:24:19.175 ======================================================== 00:24:19.175 Total : 1436.09 359.02 185270.73 72016.83 323274.36 00:24:19.175 00:24:19.175 21:00:22 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:19.175 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.435 No valid NVMe controllers or AIO or URING devices found 00:24:19.435 Initializing NVMe Controllers 00:24:19.435 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:19.435 Controller IO queue size 128, less than required. 00:24:19.435 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:19.435 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:19.435 Controller IO queue size 128, less than required. 00:24:19.435 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:19.435 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:19.435 WARNING: Some requested NVMe devices were skipped 00:24:19.435 21:00:23 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:19.435 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.975 Initializing NVMe Controllers 00:24:21.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:21.975 Controller IO queue size 128, less than required. 00:24:21.975 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:21.975 Controller IO queue size 128, less than required. 00:24:21.975 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:21.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:21.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:21.975 Initialization complete. Launching workers. 
00:24:21.975 00:24:21.975 ==================== 00:24:21.976 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:21.976 TCP transport: 00:24:21.976 polls: 37235 00:24:21.976 idle_polls: 11203 00:24:21.976 sock_completions: 26032 00:24:21.976 nvme_completions: 4035 00:24:21.976 submitted_requests: 6076 00:24:21.976 queued_requests: 1 00:24:21.976 00:24:21.976 ==================== 00:24:21.976 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:21.976 TCP transport: 00:24:21.976 polls: 43844 00:24:21.976 idle_polls: 17198 00:24:21.976 sock_completions: 26646 00:24:21.976 nvme_completions: 3855 00:24:21.976 submitted_requests: 5820 00:24:21.976 queued_requests: 1 00:24:21.976 ======================================================== 00:24:21.976 Latency(us) 00:24:21.976 Device Information : IOPS MiB/s Average min max 00:24:21.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1008.26 252.06 129588.09 67450.19 200256.36 00:24:21.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 963.27 240.82 136973.13 63119.66 216929.82 00:24:21.976 ======================================================== 00:24:21.976 Total : 1971.53 492.88 133196.35 63119.66 216929.82 00:24:21.976 00:24:21.976 21:00:25 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:21.976 21:00:25 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:21.976 21:00:25 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:21.976 21:00:25 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:21.976 21:00:25 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:21.976 21:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:21.976 21:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:21.976 21:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:21.976 21:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:21.976 21:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:21.976 21:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:21.976 rmmod nvme_tcp 00:24:21.976 rmmod nvme_fabrics 00:24:22.235 rmmod nvme_keyring 00:24:22.235 21:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:22.236 21:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:22.236 21:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:22.236 21:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1683008 ']' 00:24:22.236 21:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1683008 00:24:22.236 21:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1683008 ']' 00:24:22.236 21:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1683008 00:24:22.236 21:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:22.236 21:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.236 21:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1683008 00:24:22.236 21:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:22.236 21:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:22.236 21:00:25 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1683008' 00:24:22.236 killing process with pid 1683008 00:24:22.236 21:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1683008 00:24:22.236 21:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1683008 00:24:24.145 21:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:24.145 21:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:24.146 21:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:24.146 21:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:24.146 21:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:24.146 21:00:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.146 21:00:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.146 21:00:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.690 21:00:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:26.690 00:24:26.690 real 0m23.675s 00:24:26.690 user 0m58.462s 00:24:26.690 sys 0m7.531s 00:24:26.690 21:00:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:26.690 21:00:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:26.690 ************************************ 00:24:26.690 END TEST nvmf_perf 00:24:26.690 ************************************ 00:24:26.690 21:00:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:26.690 21:00:30 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:26.690 21:00:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:26.690 21:00:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:26.690 21:00:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:26.690 ************************************ 00:24:26.690 START TEST nvmf_fio_host 00:24:26.690 ************************************ 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:26.690 * Looking for test storage... 
00:24:26.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.690 21:00:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:26.691 21:00:30 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:33.280 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:33.280 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:33.280 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:33.280 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:33.280 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:33.281 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.281 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.281 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.281 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:33.281 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.281 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.281 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:33.281 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.281 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.281 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:33.281 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:33.281 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.281 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:33.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:24:33.542 00:24:33.542 --- 10.0.0.2 ping statistics --- 00:24:33.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.542 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:33.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:24:33.542 00:24:33.542 --- 10.0.0.1 ping statistics --- 00:24:33.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.542 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:33.542 21:00:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:33.802 21:00:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:33.802 21:00:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:33.802 21:00:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:33.802 21:00:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.802 21:00:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1690062 00:24:33.802 21:00:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:33.802 21:00:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:33.802 21:00:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1690062 00:24:33.802 21:00:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1690062 ']' 00:24:33.802 21:00:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.802 21:00:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:33.802 21:00:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.802 21:00:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:33.802 21:00:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.802 [2024-07-15 21:00:37.530109] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:24:33.802 [2024-07-15 21:00:37.530207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.802 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.802 [2024-07-15 21:00:37.601476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.802 [2024-07-15 21:00:37.676373] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:33.802 [2024-07-15 21:00:37.676410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.802 [2024-07-15 21:00:37.676418] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.802 [2024-07-15 21:00:37.676424] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.802 [2024-07-15 21:00:37.676430] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.802 [2024-07-15 21:00:37.676575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.802 [2024-07-15 21:00:37.676700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.802 [2024-07-15 21:00:37.676860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.802 [2024-07-15 21:00:37.676861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.740 21:00:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.740 21:00:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:24:34.740 21:00:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:34.740 [2024-07-15 21:00:38.426008] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.740 21:00:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:34.740 21:00:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:34.740 21:00:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.740 21:00:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:35.001 Malloc1 00:24:35.001 21:00:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:35.001 21:00:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:35.265 21:00:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.527 [2024-07-15 21:00:39.163511] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:35.527 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:35.528 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:35.528 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:35.528 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:35.528 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:35.528 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:35.528 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:35.528 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:35.528 21:00:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:36.123 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:36.123 fio-3.35 00:24:36.123 Starting 1 thread 00:24:36.123 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.668 00:24:38.668 test: (groupid=0, jobs=1): err= 0: pid=1690599: Mon Jul 15 21:00:42 2024 00:24:38.668 read: IOPS=13.7k, BW=53.6MiB/s (56.2MB/s)(107MiB/2003msec) 00:24:38.668 slat (usec): min=2, max=295, avg= 2.19, stdev= 2.46 00:24:38.668 clat (usec): min=2632, max=11122, avg=5333.02, stdev=896.46 00:24:38.668 lat (usec): min=2634, max=11129, avg=5335.22, stdev=896.58 00:24:38.668 clat percentiles (usec): 00:24:38.668 | 1.00th=[ 3851], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4752], 00:24:38.668 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5276], 00:24:38.669 | 70.00th=[ 5473], 80.00th=[ 5735], 90.00th=[ 6390], 95.00th=[ 7242], 00:24:38.669 | 99.00th=[ 8586], 99.50th=[ 8979], 99.90th=[10159], 99.95th=[10552], 00:24:38.669 | 99.99th=[10945] 00:24:38.669 bw ( KiB/s): min=53080, 
max=55464, per=99.86%, avg=54780.00, stdev=1143.85, samples=4 00:24:38.669 iops : min=13270, max=13866, avg=13695.00, stdev=285.96, samples=4 00:24:38.669 write: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(107MiB/2003msec); 0 zone resets 00:24:38.669 slat (usec): min=2, max=286, avg= 2.29, stdev= 1.89 00:24:38.669 clat (usec): min=1886, max=6862, avg=3951.02, stdev=521.77 00:24:38.669 lat (usec): min=1888, max=6867, avg=3953.31, stdev=521.90 00:24:38.669 clat percentiles (usec): 00:24:38.669 | 1.00th=[ 2638], 5.00th=[ 2999], 10.00th=[ 3261], 20.00th=[ 3556], 00:24:38.669 | 30.00th=[ 3752], 40.00th=[ 3884], 50.00th=[ 3982], 60.00th=[ 4080], 00:24:38.669 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4752], 00:24:38.669 | 99.00th=[ 5342], 99.50th=[ 5604], 99.90th=[ 6259], 99.95th=[ 6521], 00:24:38.669 | 99.99th=[ 6718] 00:24:38.669 bw ( KiB/s): min=53512, max=55296, per=99.98%, avg=54750.00, stdev=833.27, samples=4 00:24:38.669 iops : min=13378, max=13824, avg=13687.50, stdev=208.32, samples=4 00:24:38.669 lat (msec) : 2=0.02%, 4=26.20%, 10=73.71%, 20=0.08% 00:24:38.669 cpu : usr=68.13%, sys=26.22%, ctx=24, majf=0, minf=7 00:24:38.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:38.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:38.669 issued rwts: total=27470,27422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:38.669 00:24:38.669 Run status group 0 (all jobs): 00:24:38.669 READ: bw=53.6MiB/s (56.2MB/s), 53.6MiB/s-53.6MiB/s (56.2MB/s-56.2MB/s), io=107MiB (113MB), run=2003-2003msec 00:24:38.669 WRITE: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=107MiB (112MB), run=2003-2003msec 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 
-- # awk '{print $3}' 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:38.669 21:00:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:38.669 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:38.669 fio-3.35 00:24:38.669 Starting 1 thread 00:24:38.669 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.276 00:24:41.276 test: (groupid=0, jobs=1): err= 0: pid=1691420: Mon Jul 15 21:00:44 2024 00:24:41.276 read: IOPS=9014, BW=141MiB/s (148MB/s)(283MiB/2010msec) 00:24:41.276 slat (usec): min=3, max=110, avg= 3.64, stdev= 1.63 00:24:41.276 clat (usec): min=3096, max=21242, avg=8807.43, stdev=2280.11 00:24:41.276 lat (usec): min=3099, max=21246, avg=8811.07, stdev=2280.36 00:24:41.276 clat percentiles (usec): 00:24:41.276 | 1.00th=[ 4555], 5.00th=[ 5538], 10.00th=[ 6063], 20.00th=[ 6783], 00:24:41.276 | 30.00th=[ 7373], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 9241], 00:24:41.276 | 70.00th=[ 9896], 80.00th=[10814], 90.00th=[11863], 95.00th=[12780], 00:24:41.276 | 99.00th=[14877], 99.50th=[15795], 99.90th=[16712], 99.95th=[17171], 00:24:41.276 | 99.99th=[20055] 00:24:41.276 bw ( KiB/s): min=60704, max=82336, per=49.47%, avg=71344.00, stdev=9622.04, samples=4 00:24:41.276 iops : min= 3794, max= 5146, avg=4459.00, stdev=601.38, samples=4 00:24:41.276 write: IOPS=5385, BW=84.1MiB/s (88.2MB/s)(145MiB/1725msec); 0 zone resets 00:24:41.276 slat (usec): min=40, max=361, avg=41.12, stdev= 7.59 00:24:41.276 clat (usec): min=2989, max=16801, avg=9666.75, stdev=1695.46 00:24:41.276 lat (usec): min=3029, max=16841, avg=9707.88, stdev=1697.53 00:24:41.276 clat percentiles (usec): 00:24:41.276 | 1.00th=[ 6521], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 8356], 00:24:41.276 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:24:41.276 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11863], 95.00th=[12649], 00:24:41.276 | 99.00th=[15401], 99.50th=[16057], 99.90th=[16319], 99.95th=[16581], 00:24:41.276 | 99.99th=[16909] 00:24:41.276 bw ( KiB/s): min=64928, max=86016, per=86.25%, avg=74320.00, stdev=9601.16, samples=4 00:24:41.276 iops : min= 4058, max= 5376, avg=4645.00, stdev=600.07, samples=4 00:24:41.276 lat (msec) : 4=0.27%, 10=68.52%, 20=31.20%, 50=0.01% 00:24:41.276 cpu : usr=83.33%, sys=13.69%, ctx=17, majf=0, minf=22 00:24:41.276 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:41.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:41.276 issued rwts: total=18119,9290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.276 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:41.276 00:24:41.276 Run status group 0 (all jobs): 00:24:41.276 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=283MiB (297MB), run=2010-2010msec 00:24:41.276 WRITE: bw=84.1MiB/s (88.2MB/s), 84.1MiB/s-84.1MiB/s (88.2MB/s-88.2MB/s), io=145MiB (152MB), run=1725-1725msec 00:24:41.276 21:00:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:41.276 rmmod nvme_tcp 00:24:41.276 rmmod nvme_fabrics 00:24:41.276 rmmod nvme_keyring 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1690062 ']' 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1690062 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1690062 ']' 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1690062 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1690062 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1690062' 00:24:41.276 killing process with pid 1690062 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1690062 00:24:41.276 21:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1690062 00:24:41.537 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:41.537 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:41.537 
21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:41.537 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:41.537 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:41.537 21:00:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.537 21:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:41.537 21:00:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.451 21:00:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:43.451 00:24:43.451 real 0m17.266s 00:24:43.451 user 1m9.352s 00:24:43.451 sys 0m7.322s 00:24:43.712 21:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:43.712 21:00:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.712 ************************************ 00:24:43.712 END TEST nvmf_fio_host 00:24:43.712 ************************************ 00:24:43.712 21:00:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:43.712 21:00:47 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:43.712 21:00:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:43.712 21:00:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:43.712 21:00:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:43.712 ************************************ 00:24:43.712 START TEST nvmf_failover 00:24:43.712 ************************************ 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:43.712 * Looking for test storage... 
00:24:43.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:43.712 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:43.713 21:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:51.858 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.858 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:51.858 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:51.858 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:51.858 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:51.859 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:51.859 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:51.859 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:51.859 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:51.859 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:51.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:51.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:24:51.860 00:24:51.860 --- 10.0.0.2 ping statistics --- 00:24:51.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.860 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:51.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:24:51.860 00:24:51.860 --- 10.0.0.1 ping statistics --- 00:24:51.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.860 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1695879 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1695879 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1695879 ']' 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:51.860 21:00:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:51.860 [2024-07-15 21:00:54.797182] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
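The interface preparation that precedes the target start above boils down to the steps below, reproduced from the ip/iptables commands traced in this run (the interface names cvl_0_0/cvl_0_1, the namespace cvl_0_0_ns_spdk, and the 10.0.0.0/24 addresses are simply the values this run uses):

  ip netns add cvl_0_0_ns_spdk                             # the target runs inside its own network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # move the first e810 port into that namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator-side address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the initiator port
  ping -c 1 10.0.0.2                                       # sanity-check reachability in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1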
00:24:51.860 [2024-07-15 21:00:54.797254] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.860 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.860 [2024-07-15 21:00:54.885992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:51.860 [2024-07-15 21:00:54.979691] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.860 [2024-07-15 21:00:54.979748] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.860 [2024-07-15 21:00:54.979757] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.860 [2024-07-15 21:00:54.979765] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.860 [2024-07-15 21:00:54.979772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.860 [2024-07-15 21:00:54.979908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.860 [2024-07-15 21:00:54.980075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.860 [2024-07-15 21:00:54.980075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.860 21:00:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:51.860 21:00:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:51.860 21:00:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:51.860 21:00:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:51.860 21:00:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:51.860 21:00:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.860 21:00:55 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:52.122 [2024-07-15 21:00:55.761804] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.122 21:00:55 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:52.122 Malloc0 00:24:52.122 21:00:55 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:52.382 21:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:52.642 21:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:52.642 [2024-07-15 21:00:56.450551] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.642 21:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:52.905 [2024-07-15 
21:00:56.622997] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:52.905 21:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:52.905 [2024-07-15 21:00:56.791522] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:53.165 21:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1696395 00:24:53.165 21:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:53.165 21:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:53.165 21:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1696395 /var/tmp/bdevperf.sock 00:24:53.165 21:00:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1696395 ']' 00:24:53.165 21:00:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.165 21:00:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.165 21:00:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:53.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:53.165 21:00:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.165 21:00:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:54.105 21:00:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:54.105 21:00:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:54.105 21:00:57 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:54.366 NVMe0n1 00:24:54.366 21:00:58 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:54.627 00:24:54.627 21:00:58 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:54.627 21:00:58 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1696626 00:24:54.627 21:00:58 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:55.589 21:00:59 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.589 [2024-07-15 21:00:59.459974] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2bc50 is same with the state(5) to be set 00:24:55.589 [2024-07-15 21:00:59.460014] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb2bc50 is same with the state(5) to be set 00:24:55.589 [2024-07-15 21:00:59.460020] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2bc50 is same with the state(5) to be set 00:24:55.589 [2024-07-15 21:00:59.460025] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2bc50 is same with the state(5) to be set 00:24:55.589 [2024-07-15 21:00:59.460030] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2bc50 is same with the state(5) to be set 00:24:55.589 [2024-07-15 21:00:59.460034] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2bc50 is same with the state(5) to be set 00:24:55.589 [2024-07-15 21:00:59.460039] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2bc50 is same with the state(5) to be set 00:24:55.589 [2024-07-15 21:00:59.460043] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2bc50 is same with the state(5) to be set 00:24:55.589 [2024-07-15 21:00:59.460048] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2bc50 is same with the state(5) to be set 00:24:55.589 [2024-07-15 21:00:59.460052] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2bc50 is same with the state(5) to be set 00:24:55.589 [2024-07-15 21:00:59.460056] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2bc50 is same with the state(5) to be set 00:24:55.589 [2024-07-15 21:00:59.460061] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2bc50 is same with the state(5) to be set 00:24:55.589 [2024-07-15 21:00:59.460065] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2bc50 is same with the state(5) to be set 00:24:55.589 [2024-07-15 21:00:59.460069] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2bc50 is same with the state(5) to be set 00:24:55.589 [2024-07-15 21:00:59.460073] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2bc50 is same with the state(5) to be set 00:24:55.851 21:00:59 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:59.183 21:01:02 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.183 00:24:59.183 21:01:02 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:59.183 [2024-07-15 21:01:02.948404] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948433] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948439] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948444] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948449] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the 
state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948454] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948458] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948462] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948471] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948476] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948480] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948485] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948489] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948493] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948498] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948502] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948506] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948510] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948515] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948519] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948524] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948528] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948532] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 [2024-07-15 21:01:02.948537] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d3a0 is same with the state(5) to be set 00:24:59.183 21:01:02 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:02.491 21:01:05 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.491 [2024-07-15 21:01:06.111866] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
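Pulled out of the trace above, the failover exercise up to this point is driven by the following RPC sequence (a sketch assembled from the commands this run issues; rpc.py, bdevperf and bdevperf.py are invoked with full workspace paths in the log and are abbreviated here, and the harness waits for each RPC socket before issuing calls against it):

  # target side: transport, backing bdev, subsystem, and three listeners
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done

  # initiator side: bdevperf with two paths to the same subsystem, then start the verify workload
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # I/O runs for the 15-second window

  # while I/O runs, force path changes by removing and re-adding listeners
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Each listener removal aborts the in-flight commands on that path (the SQ DELETION completions seen later in this log) and the bdev_nvme layer fails over to one of the remaining paths while the verify job keeps running.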
00:25:02.491 21:01:06 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:03.431 21:01:07 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:03.431 [2024-07-15 21:01:07.288700] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288728] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288734] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288739] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288743] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288748] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288753] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288762] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288766] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288771] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288775] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288779] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288784] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288788] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288793] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288801] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288806] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288810] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288814] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) 
to be set 00:25:03.431 [2024-07-15 21:01:07.288819] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288823] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288832] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288841] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288845] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288850] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288854] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288859] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 [2024-07-15 21:01:07.288863] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2da80 is same with the state(5) to be set 00:25:03.431 21:01:07 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1696626 00:25:10.056 0 00:25:10.056 21:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1696395 00:25:10.056 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1696395 ']' 00:25:10.056 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1696395 00:25:10.056 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:10.056 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:10.056 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1696395 00:25:10.056 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:10.056 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:10.056 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1696395' 00:25:10.056 killing process with pid 1696395 00:25:10.056 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1696395 00:25:10.056 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1696395 00:25:10.056 21:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:10.056 [2024-07-15 21:00:56.872073] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:25:10.056 [2024-07-15 21:00:56.872136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1696395 ] 00:25:10.056 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.056 [2024-07-15 21:00:56.930702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.056 [2024-07-15 21:00:56.995073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.056 Running I/O for 15 seconds... 00:25:10.056 [2024-07-15 21:00:59.461828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.056 [2024-07-15 21:00:59.461864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.461882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.056 [2024-07-15 21:00:59.461890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.461901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.056 [2024-07-15 21:00:59.461909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.461918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.056 [2024-07-15 21:00:59.461926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.461936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.056 [2024-07-15 21:00:59.461943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.461953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.056 [2024-07-15 21:00:59.461960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.461969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.056 [2024-07-15 21:00:59.461977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.461986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.056 [2024-07-15 21:00:59.461993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 
nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101464 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.056 [2024-07-15 21:00:59.462293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.056 [2024-07-15 21:00:59.462301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 
[2024-07-15 21:00:59.462350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.057 [2024-07-15 21:00:59.462969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.057 [2024-07-15 21:00:59.462985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.057 [2024-07-15 21:00:59.462995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:10.058 [2024-07-15 21:00:59.463011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 
21:00:59.463177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.058 [2024-07-15 21:00:59.463363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.058 [2024-07-15 21:00:59.463393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101920 len:8 PRP1 0x0 PRP2 0x0 00:25:10.058 [2024-07-15 21:00:59.463400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.058 [2024-07-15 21:00:59.463415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.058 [2024-07-15 21:00:59.463421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101928 len:8 PRP1 0x0 PRP2 0x0 00:25:10.058 [2024-07-15 21:00:59.463428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.058 [2024-07-15 21:00:59.463441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.058 [2024-07-15 21:00:59.463447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101936 len:8 PRP1 0x0 PRP2 0x0 00:25:10.058 [2024-07-15 21:00:59.463454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.058 [2024-07-15 21:00:59.463466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.058 [2024-07-15 21:00:59.463472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101944 len:8 PRP1 0x0 PRP2 0x0 00:25:10.058 [2024-07-15 21:00:59.463479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.058 [2024-07-15 21:00:59.463491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.058 [2024-07-15 21:00:59.463497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101952 len:8 PRP1 0x0 PRP2 0x0 00:25:10.058 [2024-07-15 21:00:59.463504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:25:10.058 [2024-07-15 21:00:59.463519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.058 [2024-07-15 21:00:59.463525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101960 len:8 PRP1 0x0 PRP2 0x0 00:25:10.058 [2024-07-15 21:00:59.463532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.058 [2024-07-15 21:00:59.463545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.058 [2024-07-15 21:00:59.463551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101968 len:8 PRP1 0x0 PRP2 0x0 00:25:10.058 [2024-07-15 21:00:59.463558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.058 [2024-07-15 21:00:59.463570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.058 [2024-07-15 21:00:59.463576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101184 len:8 PRP1 0x0 PRP2 0x0 00:25:10.058 [2024-07-15 21:00:59.463583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.058 [2024-07-15 21:00:59.463596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.058 [2024-07-15 21:00:59.463602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101192 len:8 PRP1 0x0 PRP2 0x0 00:25:10.058 [2024-07-15 21:00:59.463609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.058 [2024-07-15 21:00:59.463622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.058 [2024-07-15 21:00:59.463628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101200 len:8 PRP1 0x0 PRP2 0x0 00:25:10.058 [2024-07-15 21:00:59.463635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.058 [2024-07-15 21:00:59.463647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.058 [2024-07-15 21:00:59.463653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101208 len:8 PRP1 0x0 PRP2 0x0 00:25:10.058 [2024-07-15 21:00:59.463660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.058 [2024-07-15 21:00:59.463673] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.058 [2024-07-15 21:00:59.463679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101216 len:8 PRP1 0x0 PRP2 0x0 00:25:10.058 [2024-07-15 21:00:59.463686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.058 [2024-07-15 21:00:59.463693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.463699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.463704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101224 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.463715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.463723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.463729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.463735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101232 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.463742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.463749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.463754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.463760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101240 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.463767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.463774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.463780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.463785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101248 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.463792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.463800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.463805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.463810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101976 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.463817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.463825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.463830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.463836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101984 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.463842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.463850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.463855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.463861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101992 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.463867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.463875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.463880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.463886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102000 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.463893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.463900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.463905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.463914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102008 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.463922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.463929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.463935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.463941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101256 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.463947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.463955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.463960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.463966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101264 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.463973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.463980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.463986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 
[2024-07-15 21:00:59.463991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101272 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.463998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.464005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.464010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.464016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101280 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.464023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.464031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.464036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.464041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101288 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.464048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.464056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.464061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.464067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101296 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.464073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.464081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.464086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.464092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101304 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.464099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.464106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.464113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.464119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101312 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.464217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.464225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.464231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.464236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101320 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.464243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.464251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.464256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.464262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101328 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.464269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.464276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.464282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.464288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101336 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.464295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.464302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.464307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.464313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101344 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.464320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.464328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.464333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.464339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101352 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.464346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.464354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.464359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.464365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101360 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.464372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.464379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.464384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.469824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:101368 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.469838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.469849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.059 [2024-07-15 21:00:59.469855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.059 [2024-07-15 21:00:59.469861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101376 len:8 PRP1 0x0 PRP2 0x0 00:25:10.059 [2024-07-15 21:00:59.469868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.059 [2024-07-15 21:00:59.469908] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x118f300 was disconnected and freed. reset controller. 00:25:10.060 [2024-07-15 21:00:59.469917] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:10.060 [2024-07-15 21:00:59.469940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.060 [2024-07-15 21:00:59.469948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:00:59.469957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.060 [2024-07-15 21:00:59.469964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:00:59.469972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.060 [2024-07-15 21:00:59.469979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:00:59.469987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.060 [2024-07-15 21:00:59.469993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:00:59.470005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.060 [2024-07-15 21:00:59.470034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116def0 (9): Bad file descriptor 00:25:10.060 [2024-07-15 21:00:59.473576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.060 [2024-07-15 21:00:59.509425] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:10.060 [2024-07-15 21:01:02.949564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.060 [2024-07-15 21:01:02.949600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.060 [2024-07-15 21:01:02.949626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.060 [2024-07-15 21:01:02.949643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.060 [2024-07-15 21:01:02.949660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:32936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:32944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:32968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:32976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949771] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.060 [2024-07-15 21:01:02.949794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.060 [2024-07-15 21:01:02.949810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.060 [2024-07-15 21:01:02.949826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:33000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:33008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:33024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949933] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:33056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.949989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.060 [2024-07-15 21:01:02.949998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:33072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.060 [2024-07-15 21:01:02.950005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:33080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:33096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:33104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:33120 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:33136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:33160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:33184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:33192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.061 
[2024-07-15 21:01:02.950267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.061 [2024-07-15 21:01:02.950283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.061 [2024-07-15 21:01:02.950301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.061 [2024-07-15 21:01:02.950318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.061 [2024-07-15 21:01:02.950335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.061 [2024-07-15 21:01:02.950351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.061 [2024-07-15 21:01:02.950367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.061 [2024-07-15 21:01:02.950384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.061 [2024-07-15 21:01:02.950400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:33200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:33216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:33224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:33232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:33248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:33272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:33312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:33320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:33336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.061 [2024-07-15 21:01:02.950700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:33344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.061 [2024-07-15 21:01:02.950711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:33360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:33376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:33384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:33416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:33424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:33440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950928] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:33456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:33464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:33472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:33480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.950992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.950999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:33496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:33504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951087] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:33552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:33560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:33600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:33608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33616 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:33624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.062 [2024-07-15 21:01:02.951343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.062 [2024-07-15 21:01:02.951371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33664 len:8 PRP1 0x0 PRP2 0x0 00:25:10.062 [2024-07-15 21:01:02.951378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.062 [2024-07-15 21:01:02.951394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.062 [2024-07-15 21:01:02.951400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33672 len:8 PRP1 0x0 PRP2 0x0 00:25:10.062 [2024-07-15 21:01:02.951407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.062 [2024-07-15 21:01:02.951414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.062 [2024-07-15 21:01:02.951419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33680 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33688 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33696 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33704 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33712 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33720 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33728 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 
[2024-07-15 21:01:02.951594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33736 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33744 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33752 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33760 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33768 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32880 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951749] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32888 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32896 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32904 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32912 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32920 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.063 [2024-07-15 21:01:02.951882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.063 [2024-07-15 21:01:02.951888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32928 len:8 PRP1 0x0 PRP2 0x0 00:25:10.063 [2024-07-15 21:01:02.951894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951930] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1191480 
was disconnected and freed. reset controller. 00:25:10.063 [2024-07-15 21:01:02.951939] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:10.063 [2024-07-15 21:01:02.951958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.063 [2024-07-15 21:01:02.951966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.063 [2024-07-15 21:01:02.951983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.951990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.063 [2024-07-15 21:01:02.951997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.952005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.063 [2024-07-15 21:01:02.952011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:02.952019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.063 [2024-07-15 21:01:02.962571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116def0 (9): Bad file descriptor 00:25:10.063 [2024-07-15 21:01:02.966143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.063 [2024-07-15 21:01:03.037103] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
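The notices above show every queued READ/WRITE on the qpair being completed with "ABORTED - SQ DELETION" while the connection to 10.0.0.2:4421 is torn down; bdev_nvme then starts a failover to 10.0.0.2:4422 and the controller reset completes successfully. A minimal sketch for summarizing this pattern offline, assuming the console output has been saved to a local file named build.log (hypothetical name; any saved copy of this log works) and using only standard grep/awk:

# Count how many commands were completed with "ABORTED - SQ DELETION".
grep -o 'ABORTED - SQ DELETION' build.log | wc -l

# Break the printed I/O commands down by opcode (READ vs WRITE).
grep -oE '(READ|WRITE) sqid:[0-9]+ cid:[0-9]+' build.log | awk '{print $1}' | sort | uniq -c

Because each aborted command is printed alongside its completion, the two counts should be of the same order; a large gap would suggest completions other than SQ-deletion aborts in the same window.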
00:25:10.063 [2024-07-15 21:01:07.290259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.063 [2024-07-15 21:01:07.290295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:07.290313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.063 [2024-07-15 21:01:07.290321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:07.290331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.063 [2024-07-15 21:01:07.290338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:07.290348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.063 [2024-07-15 21:01:07.290355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.063 [2024-07-15 21:01:07.290364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.063 [2024-07-15 21:01:07.290371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290468] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.064 [2024-07-15 21:01:07.290701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.064 [2024-07-15 21:01:07.290718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.064 [2024-07-15 21:01:07.290733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.064 [2024-07-15 21:01:07.290749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.064 [2024-07-15 21:01:07.290765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.064 [2024-07-15 21:01:07.290781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:17 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.064 [2024-07-15 21:01:07.290797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.064 [2024-07-15 21:01:07.290806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.064 [2024-07-15 21:01:07.290812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.290821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.290829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.290838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.290847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.290856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.290863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.290872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.290878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.290887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.290894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.290903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.290910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.290919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.290926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.290935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.290942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.290951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67840 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.290958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.290967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.290973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.290982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.290989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.290998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.291005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.291014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.291020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.291029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.291036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.291045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.291054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.291063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.291070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.291080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.291087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.291096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.291103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.291112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 
21:01:07.291119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.291132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.291140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.291149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.291156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.291165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.291173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.291182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.291189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.291199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.065 [2024-07-15 21:01:07.291206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.065 [2024-07-15 21:01:07.291215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.066 [2024-07-15 21:01:07.291222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.066 [2024-07-15 21:01:07.291238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.066 [2024-07-15 21:01:07.291254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.066 [2024-07-15 21:01:07.291271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.066 [2024-07-15 21:01:07.291287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.066 [2024-07-15 21:01:07.291303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.066 [2024-07-15 21:01:07.291319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 
21:01:07.291774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.066 [2024-07-15 21:01:07.291904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.066 [2024-07-15 21:01:07.291911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.291920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.291927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.291936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.291943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.291952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.291958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.291967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.291974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.291983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.291990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.291999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:59 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68432 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.067 [2024-07-15 21:01:07.292327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.067 [2024-07-15 21:01:07.292357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68472 len:8 PRP1 0x0 PRP2 0x0 00:25:10.067 [2024-07-15 21:01:07.292364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.067 [2024-07-15 21:01:07.292408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.067 [2024-07-15 21:01:07.292424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.067 [2024-07-15 21:01:07.292439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.067 [2024-07-15 21:01:07.292453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x116def0 is same with the state(5) to be set 00:25:10.067 [2024-07-15 21:01:07.292733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.067 [2024-07-15 21:01:07.292741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.067 [2024-07-15 21:01:07.292747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68480 len:8 PRP1 0x0 PRP2 0x0 00:25:10.067 [2024-07-15 21:01:07.292754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.067 [2024-07-15 21:01:07.292769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.067 [2024-07-15 21:01:07.292775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67464 len:8 PRP1 0x0 PRP2 0x0 00:25:10.067 [2024-07-15 21:01:07.292784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.067 [2024-07-15 21:01:07.292798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.067 [2024-07-15 21:01:07.292804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67472 len:8 PRP1 0x0 PRP2 0x0 00:25:10.067 [2024-07-15 21:01:07.292811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.067 [2024-07-15 21:01:07.292823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.067 [2024-07-15 21:01:07.292829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67480 len:8 PRP1 0x0 PRP2 0x0 00:25:10.067 [2024-07-15 21:01:07.292836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.067 [2024-07-15 21:01:07.292849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.067 [2024-07-15 21:01:07.292855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67488 len:8 PRP1 0x0 PRP2 0x0 00:25:10.067 [2024-07-15 21:01:07.292862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.067 [2024-07-15 21:01:07.292869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.067 [2024-07-15 21:01:07.292874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.067 [2024-07-15 21:01:07.292880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67496 len:8 PRP1 0x0 PRP2 0x0 00:25:10.067 [2024-07-15 21:01:07.292887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:10.067 [2024-07-15 21:01:07.292895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.292900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.292906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67504 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.292913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.292920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.292926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.292932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67512 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.292939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.292946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.292951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.292957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67520 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.292964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.292971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.292977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.292984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67528 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.292991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.292998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.293004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.293009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67536 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.293016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.293024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.293030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.293035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67544 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.293043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.293050] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.293056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.302830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67552 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.302861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.302875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.302881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.302888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67560 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.302895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.302903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.302909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.302915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67568 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.302922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.302929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.302935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.302941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67576 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.302948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.302955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.302961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.302967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67584 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.302973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.302981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.302991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.302997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67592 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.303004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.303011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:25:10.068 [2024-07-15 21:01:07.303016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.303022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67600 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.303029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.303037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.303042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.303048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67608 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.303055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.303062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.303067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.303073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67616 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.303081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.303088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.303093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.303099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67624 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.303106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.303113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.303119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.303133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67632 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.303139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.303147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.303152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.303158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67640 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.303165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.303172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.303177] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.303183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67648 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.303190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.303199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.303204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.303210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67656 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.303217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.303224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.303229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.303235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67720 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.303242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.303250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.303255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.303261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67728 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.303267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.303275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.303280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.303286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67736 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.303293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.303300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.303305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.068 [2024-07-15 21:01:07.303311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67744 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.303317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.303325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.303331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:25:10.068 [2024-07-15 21:01:07.303337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67752 len:8 PRP1 0x0 PRP2 0x0 00:25:10.068 [2024-07-15 21:01:07.303343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.068 [2024-07-15 21:01:07.303351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.068 [2024-07-15 21:01:07.303356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67760 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67768 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67776 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67784 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67792 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 
21:01:07.303489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67800 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67808 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67816 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67824 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67832 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67840 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67848 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67856 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67864 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67872 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67880 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67888 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:67896 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67904 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67912 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67920 len:8 PRP1 0x0 PRP2 0x0 00:25:10.069 [2024-07-15 21:01:07.303876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.069 [2024-07-15 21:01:07.303884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.069 [2024-07-15 21:01:07.303889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.069 [2024-07-15 21:01:07.303895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67928 len:8 PRP1 0x0 PRP2 0x0 00:25:10.070 [2024-07-15 21:01:07.303901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.070 [2024-07-15 21:01:07.303908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.070 [2024-07-15 21:01:07.303914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.070 [2024-07-15 21:01:07.303920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67936 len:8 PRP1 0x0 PRP2 0x0 00:25:10.070 [2024-07-15 21:01:07.303927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.070 [2024-07-15 21:01:07.303934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.070 [2024-07-15 21:01:07.303939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.070 [2024-07-15 21:01:07.303944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67944 len:8 PRP1 0x0 PRP2 0x0 
00:25:10.070 [2024-07-15 21:01:07.303951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.070 [2024-07-15 21:01:07.303959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.070 [2024-07-15 21:01:07.303964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.070 [2024-07-15 21:01:07.303970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67952 len:8 PRP1 0x0 PRP2 0x0 00:25:10.070 [2024-07-15 21:01:07.303976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.070 [2024-07-15 21:01:07.303985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.070 [2024-07-15 21:01:07.303991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.070 [2024-07-15 21:01:07.303997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67960 len:8 PRP1 0x0 PRP2 0x0 00:25:10.070 [2024-07-15 21:01:07.304003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.070 [2024-07-15 21:01:07.304010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.070 [2024-07-15 21:01:07.304015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.070 [2024-07-15 21:01:07.304021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67664 len:8 PRP1 0x0 PRP2 0x0 00:25:10.070 [2024-07-15 21:01:07.304028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.070 [2024-07-15 21:01:07.304036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.070 [2024-07-15 21:01:07.304041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.070 [2024-07-15 21:01:07.304047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67672 len:8 PRP1 0x0 PRP2 0x0 00:25:10.070 [2024-07-15 21:01:07.304053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.070 [2024-07-15 21:01:07.304061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.070 [2024-07-15 21:01:07.304066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.070 [2024-07-15 21:01:07.304071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67680 len:8 PRP1 0x0 PRP2 0x0 00:25:10.070 [2024-07-15 21:01:07.304078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.070 [2024-07-15 21:01:07.304086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.070 [2024-07-15 21:01:07.304091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:10.070 [2024-07-15 21:01:07.304097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67688 len:8 PRP1 0x0 PRP2 0x0 00:25:10.070 [2024-07-15 21:01:07.304103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request / nvme_io_qpair_print_command / spdk_nvme_print_completion notices omitted: every remaining queued I/O on qid:1 (READ lba:67696-67712 and WRITE lba:67968-68472, len:8 each) is completed manually with status ABORTED - SQ DELETION (00/08) while the submission queue is deleted for failover; the last completion of the burst follows ...]
00:25:10.072 [2024-07-15 21:01:07.313485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.073 [2024-07-15 21:01:07.313525] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1191270 was disconnected and freed. reset controller. 00:25:10.073 [2024-07-15 21:01:07.313534] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:10.073 [2024-07-15 21:01:07.313542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.073 [2024-07-15 21:01:07.313584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116def0 (9): Bad file descriptor 00:25:10.073 [2024-07-15 21:01:07.317130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.073 [2024-07-15 21:01:07.483394] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:10.073 00:25:10.073 Latency(us) 00:25:10.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.073 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:10.073 Verification LBA range: start 0x0 length 0x4000 00:25:10.073 NVMe0n1 : 15.05 11578.52 45.23 668.03 0.00 10402.90 832.85 45656.75 00:25:10.073 =================================================================================================================== 00:25:10.073 Total : 11578.52 45.23 668.03 0.00 10402.90 832.85 45656.75 00:25:10.073 Received shutdown signal, test time was about 15.000000 seconds 00:25:10.073 00:25:10.073 Latency(us) 00:25:10.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.073 =================================================================================================================== 00:25:10.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:10.073 21:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:10.073 21:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:10.073 21:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:10.073 21:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1699521 00:25:10.073 21:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1699521 /var/tmp/bdevperf.sock 00:25:10.073 21:01:13 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:10.073 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1699521 ']' 00:25:10.073 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:10.073 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.073 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:10.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
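As a quick sanity check (plain arithmetic, not part of the captured run), the MiB/s column in the table above follows directly from the IOPS column and the 4096-byte I/O size shown in the job line:

awk 'BEGIN { printf "%.2f MiB/s\n", 11578.52 * 4096 / 1048576 }'   # 11578.52 IOPS x 4 KiB per I/O
# prints 45.23 MiB/s, matching the NVMe0n1 row of the 15-second run above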
00:25:10.073 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.073 21:01:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:10.644 21:01:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:10.644 21:01:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:10.644 21:01:14 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:10.904 [2024-07-15 21:01:14.658083] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:10.905 21:01:14 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:11.165 [2024-07-15 21:01:14.814445] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:11.165 21:01:14 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:11.165 NVMe0n1 00:25:11.426 21:01:15 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:11.426 00:25:11.426 21:01:15 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:12.000 00:25:12.000 21:01:15 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:12.000 21:01:15 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:12.000 21:01:15 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:12.261 21:01:16 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:15.562 21:01:19 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:15.562 21:01:19 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:15.562 21:01:19 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1700674 00:25:15.562 21:01:19 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:15.562 21:01:19 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1700674 00:25:16.504 0 00:25:16.504 21:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:16.504 [2024-07-15 21:01:13.759760] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
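For readability, this phase of the failover test traced above boils down to the following RPC sequence; the commands are copied from the xtrace with the long /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk script paths shortened, so treat this as a condensed sketch rather than an extra test step:

rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # drop the active path; bdev_nvme fails over to 10.0.0.2:4421
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # drive I/O through NVMe0n1 while the failover happens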
00:25:16.504 [2024-07-15 21:01:13.759820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699521 ] 00:25:16.504 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.504 [2024-07-15 21:01:13.818632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.504 [2024-07-15 21:01:13.881639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.504 [2024-07-15 21:01:15.997725] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:16.504 [2024-07-15 21:01:15.997768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.504 [2024-07-15 21:01:15.997779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.504 [2024-07-15 21:01:15.997788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.504 [2024-07-15 21:01:15.997795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.504 [2024-07-15 21:01:15.997803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.504 [2024-07-15 21:01:15.997810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.504 [2024-07-15 21:01:15.997818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.504 [2024-07-15 21:01:15.997825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.504 [2024-07-15 21:01:15.997832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.504 [2024-07-15 21:01:15.997859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.504 [2024-07-15 21:01:15.997873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ddef0 (9): Bad file descriptor 00:25:16.504 [2024-07-15 21:01:16.043739] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:16.504 Running I/O for 1 seconds... 
00:25:16.504 00:25:16.505 Latency(us) 00:25:16.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.505 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:16.505 Verification LBA range: start 0x0 length 0x4000 00:25:16.505 NVMe0n1 : 1.01 11616.21 45.38 0.00 0.00 10962.61 2512.21 11414.19 00:25:16.505 =================================================================================================================== 00:25:16.505 Total : 11616.21 45.38 0.00 0.00 10962.61 2512.21 11414.19 00:25:16.505 21:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:16.505 21:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:16.767 21:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:16.767 21:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:16.767 21:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:17.027 21:01:20 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:17.289 21:01:21 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:20.585 21:01:24 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:20.585 21:01:24 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:20.585 21:01:24 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1699521 00:25:20.585 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1699521 ']' 00:25:20.585 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1699521 00:25:20.585 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:20.585 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:20.585 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1699521 00:25:20.585 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:20.585 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:20.585 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1699521' 00:25:20.585 killing process with pid 1699521 00:25:20.585 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1699521 00:25:20.585 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1699521 00:25:20.585 21:01:24 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:20.585 21:01:24 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:20.846 
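The 1-second run above is also internally consistent with its own latency figure: with the fixed queue depth of 128 from the job line, Little's law (IOPS roughly equals outstanding I/Os divided by mean latency) predicts a rate close to what bdevperf reports. A rough check, using the 10962.61 us average:

awk 'BEGIN { printf "%.0f IOPS\n", 128 / (10962.61 / 1e6) }'
# prints about 11676 IOPS, within roughly 0.5% of the 11616.21 measured above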
21:01:24 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:20.846 rmmod nvme_tcp 00:25:20.846 rmmod nvme_fabrics 00:25:20.846 rmmod nvme_keyring 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1695879 ']' 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1695879 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1695879 ']' 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1695879 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1695879 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1695879' 00:25:20.846 killing process with pid 1695879 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1695879 00:25:20.846 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1695879 00:25:21.106 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:21.106 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:21.106 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:21.106 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:21.106 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:21.106 21:01:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.106 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.106 21:01:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.015 21:01:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:23.015 00:25:23.015 real 0m39.422s 00:25:23.015 user 2m1.934s 00:25:23.015 sys 0m7.950s 00:25:23.015 21:01:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:23.015 21:01:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
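Both killprocess calls above (pid 1699521 for bdevperf, pid 1695879 for the nvmf target) follow the same check-then-kill pattern visible in the xtrace; a minimal standalone reconstruction of that pattern (the real helper in autotest_common.sh carries more argument checking) is:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                                     # liveness check; non-zero if the pid is already gone
    [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1   # refuse to kill a bare sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                                    # reap the child so the next test starts clean
}
killprocess 1699521
killprocess 1695879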
00:25:23.015 ************************************ 00:25:23.015 END TEST nvmf_failover 00:25:23.015 ************************************ 00:25:23.015 21:01:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:23.015 21:01:26 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:23.015 21:01:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:23.015 21:01:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:23.015 21:01:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.275 ************************************ 00:25:23.275 START TEST nvmf_host_discovery 00:25:23.275 ************************************ 00:25:23.275 21:01:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:23.275 * Looking for test storage... 00:25:23.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.275 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:23.276 21:01:27 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:23.276 21:01:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.858 21:01:33 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.858 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:30.119 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:30.119 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:30.119 21:01:33 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:30.119 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:30.119 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.119 21:01:33 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:30.119 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:30.120 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.120 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.120 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.120 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.120 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:30.120 21:01:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.120 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:30.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:25:30.381 00:25:30.381 --- 10.0.0.2 ping statistics --- 00:25:30.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.381 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:30.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:25:30.381 00:25:30.381 --- 10.0.0.1 ping statistics --- 00:25:30.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.381 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1705822 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
1705822 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1705822 ']' 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:30.381 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.381 [2024-07-15 21:01:34.168100] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:25:30.381 [2024-07-15 21:01:34.168157] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.381 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.381 [2024-07-15 21:01:34.225391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.641 [2024-07-15 21:01:34.278138] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.641 [2024-07-15 21:01:34.278171] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.641 [2024-07-15 21:01:34.278176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.641 [2024-07-15 21:01:34.278181] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.641 [2024-07-15 21:01:34.278185] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
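The trace above brings the target side up by hand: the E810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 at 10.0.0.1/24, TCP port 4420 is opened in iptables, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace with -i 0 -e 0xFFFF -m 0x2. Below is a condensed, hedged sketch of that bring-up; the workspace path is simply this job's layout, and the rpc_get_methods polling loop stands in for the test's waitforlisten helper rather than reproducing it.

# Minimal sketch of the target-side bring-up traced above (assumptions noted in the lead-in).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

# Move the target-side port into its own namespace and address both ends.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
modprobe nvme-tcp

# Start the target inside the namespace, as the test does.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &

# Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers.
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

The entries that follow then configure this target over the same socket: nvmf_create_transport -t tcp -o -u 8192, the discovery listener on 10.0.0.2:8009, and the null0/null1 bdevs that later become the discovered namespaces.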
00:25:30.641 [2024-07-15 21:01:34.278207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.211 [2024-07-15 21:01:34.955752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.211 [2024-07-15 21:01:34.967905] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.211 null0 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.211 null1 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.211 21:01:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.211 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.211 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1706051 00:25:31.211 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1706051 /tmp/host.sock 00:25:31.211 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:31.211 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1706051 ']' 00:25:31.211 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:31.211 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:31.211 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:31.211 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:31.211 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:31.211 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.211 [2024-07-15 21:01:35.056129] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:25:31.211 [2024-07-15 21:01:35.056176] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706051 ] 00:25:31.211 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.471 [2024-07-15 21:01:35.113850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.471 [2024-07-15 21:01:35.179008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.040 21:01:35 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.040 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.041 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.041 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.041 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.300 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:32.300 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:32.300 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.300 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.300 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.300 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:32.300 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.300 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.300 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.300 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.300 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.300 21:01:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.300 21:01:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.300 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.568 [2024-07-15 21:01:36.203017] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.568 
21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.568 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.569 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:32.569 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:32.569 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:32.569 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:32.569 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:32.569 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:32.569 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.569 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.569 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.569 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.569 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.569 21:01:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.569 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.569 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:25:32.569 21:01:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:33.184 [2024-07-15 21:01:36.853345] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:33.184 [2024-07-15 21:01:36.853367] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:33.184 [2024-07-15 21:01:36.853382] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:33.184 [2024-07-15 21:01:36.940668] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:33.184 [2024-07-15 21:01:37.005633] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:33.184 [2024-07-15 21:01:37.005656] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:33.754 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:33.754 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:33.754 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:33.754 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.754 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.754 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.754 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.754 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.754 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.754 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.754 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.754 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:33.755 21:01:37 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.755 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.016 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.277 [2024-07-15 21:01:37.911422] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:34.277 [2024-07-15 21:01:37.911845] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:34.277 [2024-07-15 21:01:37.911872] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.277 21:01:37 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.277 21:01:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.277 [2024-07-15 21:01:37.999574] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:34.277 21:01:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.277 [2024-07-15 21:01:38.059375] 
bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:34.277 [2024-07-15 21:01:38.059393] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:34.277 [2024-07-15 21:01:38.059399] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:34.278 21:01:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:34.278 21:01:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:35.218 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.218 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:35.218 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:35.218 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:35.218 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:35.218 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.218 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:35.218 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.218 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:35.218 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.480 [2024-07-15 21:01:39.191575] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:35.480 [2024-07-15 21:01:39.191597] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:35.480 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.480 [2024-07-15 21:01:39.195860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.480 [2024-07-15 21:01:39.195878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.480 [2024-07-15 21:01:39.195887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.480 [2024-07-15 21:01:39.195895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.480 [2024-07-15 21:01:39.195903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.481 [2024-07-15 21:01:39.195910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.481 [2024-07-15 21:01:39.195918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.481 [2024-07-15 21:01:39.195925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.481 [2024-07-15 21:01:39.195932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10389b0 is same with the state(5) to be set 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
get_subsystem_names 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:35.481 [2024-07-15 21:01:39.205876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10389b0 (9): Bad file descriptor 00:25:35.481 [2024-07-15 21:01:39.215914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.481 [2024-07-15 21:01:39.216426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.481 [2024-07-15 21:01:39.216463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10389b0 with addr=10.0.0.2, port=4420 00:25:35.481 [2024-07-15 21:01:39.216474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10389b0 is same with the state(5) to be set 00:25:35.481 [2024-07-15 21:01:39.216492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10389b0 (9): Bad file descriptor 00:25:35.481 [2024-07-15 21:01:39.216504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.481 [2024-07-15 21:01:39.216511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.481 [2024-07-15 21:01:39.216519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.481 [2024-07-15 21:01:39.216535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.481 [2024-07-15 21:01:39.225970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.481 [2024-07-15 21:01:39.226472] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.481 [2024-07-15 21:01:39.226508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10389b0 with addr=10.0.0.2, port=4420 00:25:35.481 [2024-07-15 21:01:39.226519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10389b0 is same with the state(5) to be set 00:25:35.481 [2024-07-15 21:01:39.226537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10389b0 (9): Bad file descriptor 00:25:35.481 [2024-07-15 21:01:39.226564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.481 [2024-07-15 21:01:39.226572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.481 [2024-07-15 21:01:39.226580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.481 [2024-07-15 21:01:39.226594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.481 [2024-07-15 21:01:39.236025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.481 [2024-07-15 21:01:39.236586] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.481 [2024-07-15 21:01:39.236623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10389b0 with addr=10.0.0.2, port=4420 00:25:35.481 [2024-07-15 21:01:39.236634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10389b0 is same with the state(5) to be set 00:25:35.481 [2024-07-15 21:01:39.236657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10389b0 (9): Bad file descriptor 00:25:35.481 [2024-07-15 21:01:39.236669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.481 [2024-07-15 21:01:39.236676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.481 [2024-07-15 21:01:39.236683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.481 [2024-07-15 21:01:39.236698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.481 [2024-07-15 21:01:39.246085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.481 [2024-07-15 21:01:39.246513] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.481 [2024-07-15 21:01:39.246527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10389b0 with addr=10.0.0.2, port=4420 00:25:35.481 [2024-07-15 21:01:39.246535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10389b0 is same with the state(5) to be set 00:25:35.481 [2024-07-15 21:01:39.246547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10389b0 (9): Bad file descriptor 00:25:35.481 [2024-07-15 21:01:39.246557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.481 [2024-07-15 21:01:39.246564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.481 [2024-07-15 21:01:39.246571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.481 [2024-07-15 21:01:39.246582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:35.481 [2024-07-15 21:01:39.256161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.481 [2024-07-15 21:01:39.257450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.481 [2024-07-15 21:01:39.257471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10389b0 with addr=10.0.0.2, port=4420 00:25:35.481 [2024-07-15 21:01:39.257480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10389b0 is same with the state(5) to be set 00:25:35.481 [2024-07-15 21:01:39.257494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10389b0 (9): Bad file descriptor 00:25:35.481 [2024-07-15 21:01:39.257515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.481 [2024-07-15 21:01:39.257522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.481 [2024-07-15 21:01:39.257529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.481 [2024-07-15 21:01:39.257544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.481 [2024-07-15 21:01:39.266214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.481 [2024-07-15 21:01:39.266634] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.481 [2024-07-15 21:01:39.266646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10389b0 with addr=10.0.0.2, port=4420 00:25:35.481 [2024-07-15 21:01:39.266654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10389b0 is same with the state(5) to be set 00:25:35.481 [2024-07-15 21:01:39.266665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10389b0 (9): Bad file descriptor 00:25:35.481 [2024-07-15 21:01:39.266700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.481 [2024-07-15 21:01:39.266708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.481 [2024-07-15 21:01:39.266715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.481 [2024-07-15 21:01:39.266726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.481 [2024-07-15 21:01:39.276270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.481 [2024-07-15 21:01:39.276671] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.481 [2024-07-15 21:01:39.276682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10389b0 with addr=10.0.0.2, port=4420 00:25:35.481 [2024-07-15 21:01:39.276690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10389b0 is same with the state(5) to be set 00:25:35.481 [2024-07-15 21:01:39.276701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10389b0 (9): Bad file descriptor 00:25:35.481 [2024-07-15 21:01:39.276711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.481 [2024-07-15 21:01:39.276717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.481 [2024-07-15 21:01:39.276723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.481 [2024-07-15 21:01:39.276740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.481 [2024-07-15 21:01:39.278739] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:35.481 [2024-07-15 21:01:39.278758] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:35.481 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.482 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:35.743 
21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:35.743 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.744 21:01:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.128 [2024-07-15 21:01:40.624332] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:37.128 [2024-07-15 21:01:40.624359] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:37.128 [2024-07-15 21:01:40.624373] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:37.128 [2024-07-15 21:01:40.712643] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:37.128 [2024-07-15 21:01:40.979248] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:37.128 [2024-07-15 21:01:40.979282] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:37.128 21:01:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.128 21:01:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:37.128 21:01:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:37.128 21:01:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:37.128 21:01:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:37.128 21:01:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:37.128 21:01:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:37.128 21:01:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:37.128 21:01:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:37.128 21:01:40 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.128 21:01:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.128 request: 00:25:37.128 { 00:25:37.128 "name": "nvme", 00:25:37.128 "trtype": "tcp", 00:25:37.128 "traddr": "10.0.0.2", 00:25:37.128 "adrfam": "ipv4", 00:25:37.128 "trsvcid": "8009", 00:25:37.128 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:37.128 "wait_for_attach": true, 00:25:37.128 "method": "bdev_nvme_start_discovery", 00:25:37.128 "req_id": 1 00:25:37.128 } 00:25:37.128 Got JSON-RPC error response 00:25:37.128 response: 00:25:37.128 { 00:25:37.128 "code": -17, 00:25:37.128 "message": "File exists" 00:25:37.128 } 00:25:37.128 21:01:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:37.128 21:01:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:37.128 21:01:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:37.128 21:01:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:37.128 21:01:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:37.128 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:37.128 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:37.128 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:37.128 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.128 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:37.128 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.128 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:37.388 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.389 request: 00:25:37.389 { 00:25:37.389 "name": "nvme_second", 00:25:37.389 "trtype": "tcp", 00:25:37.389 "traddr": "10.0.0.2", 00:25:37.389 "adrfam": "ipv4", 00:25:37.389 "trsvcid": "8009", 00:25:37.389 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:37.389 "wait_for_attach": true, 00:25:37.389 "method": "bdev_nvme_start_discovery", 00:25:37.389 "req_id": 1 00:25:37.389 } 00:25:37.389 Got JSON-RPC error response 00:25:37.389 response: 00:25:37.389 { 00:25:37.389 "code": -17, 00:25:37.389 "message": "File exists" 00:25:37.389 } 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# xargs 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.389 21:01:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.776 [2024-07-15 21:01:42.244132] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.776 [2024-07-15 21:01:42.244164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1076ec0 with addr=10.0.0.2, port=8010 00:25:38.776 [2024-07-15 21:01:42.244178] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:38.776 [2024-07-15 21:01:42.244185] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:38.776 [2024-07-15 21:01:42.244192] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:39.714 [2024-07-15 21:01:43.246424] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.714 [2024-07-15 21:01:43.246447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1076ec0 with addr=10.0.0.2, port=8010 00:25:39.714 [2024-07-15 21:01:43.246458] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:39.714 [2024-07-15 21:01:43.246464] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:39.714 [2024-07-15 21:01:43.246470] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:40.651 [2024-07-15 21:01:44.248367] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:40.651 request: 00:25:40.651 { 00:25:40.651 "name": "nvme_second", 00:25:40.651 "trtype": "tcp", 00:25:40.651 "traddr": "10.0.0.2", 00:25:40.651 "adrfam": "ipv4", 00:25:40.651 "trsvcid": "8010", 00:25:40.651 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:40.651 "wait_for_attach": false, 00:25:40.651 "attach_timeout_ms": 3000, 00:25:40.651 "method": "bdev_nvme_start_discovery", 00:25:40.651 "req_id": 
1 00:25:40.651 } 00:25:40.651 Got JSON-RPC error response 00:25:40.651 response: 00:25:40.651 { 00:25:40.651 "code": -110, 00:25:40.651 "message": "Connection timed out" 00:25:40.651 } 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1706051 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:40.651 rmmod nvme_tcp 00:25:40.651 rmmod nvme_fabrics 00:25:40.651 rmmod nvme_keyring 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1705822 ']' 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1705822 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1705822 ']' 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1705822 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1705822 
00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1705822' 00:25:40.651 killing process with pid 1705822 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1705822 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1705822 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:40.651 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:40.912 21:01:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.912 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:40.912 21:01:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.820 21:01:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:42.820 00:25:42.820 real 0m19.682s 00:25:42.820 user 0m23.580s 00:25:42.820 sys 0m6.508s 00:25:42.820 21:01:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:42.820 21:01:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.820 ************************************ 00:25:42.820 END TEST nvmf_host_discovery 00:25:42.820 ************************************ 00:25:42.820 21:01:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:42.820 21:01:46 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:42.820 21:01:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:42.820 21:01:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.820 21:01:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:42.820 ************************************ 00:25:42.820 START TEST nvmf_host_multipath_status 00:25:42.820 ************************************ 00:25:42.820 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:43.082 * Looking for test storage... 
00:25:43.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:43.082 21:01:46 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.082 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.083 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:43.083 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:43.083 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:43.083 21:01:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:51.230 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:51.230 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:51.230 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:51.230 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:51.230 21:01:53 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:51.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:51.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:25:51.230 00:25:51.230 --- 10.0.0.2 ping statistics --- 00:25:51.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.230 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:51.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:51.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:25:51.230 00:25:51.230 --- 10.0.0.1 ping statistics --- 00:25:51.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.230 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:51.230 21:01:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:51.230 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:51.230 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:51.230 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:51.230 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:51.230 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1712039 00:25:51.230 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1712039 00:25:51.230 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:51.230 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1712039 ']' 00:25:51.230 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.230 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:51.230 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.230 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:51.230 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:51.230 [2024-07-15 21:01:54.061830] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
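(Editor's aside: the nvmf_tcp_init sequence traced above boils down to the steps below. The interface names, addresses, namespace name, and the nvmf_tgt invocation are copied from the log lines; this is a condensed sketch for readers skimming the trace, not the verbatim nvmf/common.sh code.)

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1                  # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator interface
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target interface
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3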
00:25:51.231 [2024-07-15 21:01:54.061882] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.231 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.231 [2024-07-15 21:01:54.126748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:51.231 [2024-07-15 21:01:54.191494] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.231 [2024-07-15 21:01:54.191533] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.231 [2024-07-15 21:01:54.191541] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.231 [2024-07-15 21:01:54.191547] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.231 [2024-07-15 21:01:54.191553] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:51.231 [2024-07-15 21:01:54.191688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.231 [2024-07-15 21:01:54.191689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.231 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:51.231 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:51.231 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:51.231 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:51.231 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:51.231 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.231 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1712039 00:25:51.231 21:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:51.231 [2024-07-15 21:01:55.003342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.231 21:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:51.493 Malloc0 00:25:51.493 21:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:51.493 21:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:51.754 21:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.754 [2024-07-15 21:01:55.631995] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.754 21:01:55 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:52.016 [2024-07-15 21:01:55.788363] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:52.016 21:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1712425 00:25:52.016 21:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:52.016 21:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:52.016 21:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1712425 /var/tmp/bdevperf.sock 00:25:52.016 21:01:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1712425 ']' 00:25:52.016 21:01:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:52.016 21:01:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:52.016 21:01:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:52.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:52.016 21:01:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:52.016 21:01:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:52.959 21:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:52.959 21:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:52.959 21:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:52.959 21:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:53.219 Nvme0n1 00:25:53.480 21:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:53.480 Nvme0n1 00:25:53.742 21:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:53.742 21:01:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:55.658 21:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:55.658 21:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:55.919 21:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:55.919 21:01:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:57.308 21:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:57.308 21:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:57.308 21:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.308 21:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:57.308 21:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.308 21:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:57.308 21:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.308 21:02:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:57.308 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:57.308 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:57.308 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.308 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:57.602 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.602 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:57.602 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.602 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:57.602 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.603 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:57.603 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.603 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:57.864 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.864 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:57.864 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.864 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:58.124 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.124 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:58.124 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:58.124 21:02:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:58.384 21:02:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:59.325 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:59.325 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:59.325 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.325 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:59.585 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:59.585 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:59.585 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.585 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:59.845 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.845 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:59.845 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.845 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:59.845 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.845 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:59.845 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.845 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:00.106 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.106 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:00.106 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.106 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:00.106 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.106 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:00.106 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.106 21:02:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:00.366 21:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.366 21:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:00.366 21:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:00.627 21:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:00.627 21:02:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:02.010 21:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:02.010 21:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:02.010 21:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.010 21:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:02.010 21:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.010 21:02:05 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:02.010 21:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.010 21:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:02.010 21:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:02.010 21:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:02.010 21:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.010 21:02:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:02.271 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.271 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:02.271 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.271 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:02.531 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.531 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:02.531 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.531 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:02.531 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.531 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:02.531 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.531 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:02.791 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.791 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:02.791 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:02.791 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:03.051 21:02:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:03.990 21:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:03.990 21:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:03.990 21:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.990 21:02:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:04.250 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.250 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:04.250 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.250 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:04.510 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:04.510 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:04.510 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.510 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:04.510 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.510 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:04.510 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.510 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:04.770 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.770 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:04.770 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.770 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.029 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:26:05.029 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:05.029 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.029 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:05.029 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.029 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:05.029 21:02:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:05.289 21:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:05.549 21:02:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:06.490 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:06.490 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:06.490 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.490 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:06.490 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.490 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:06.491 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.491 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:06.752 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.752 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:06.752 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.752 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.013 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.013 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:26:07.013 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.013 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:07.013 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.013 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:07.013 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.013 21:02:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:07.274 21:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.274 21:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:07.274 21:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.274 21:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:07.534 21:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.534 21:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:07.534 21:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:07.534 21:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:07.794 21:02:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:08.750 21:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:08.750 21:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:08.750 21:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.750 21:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:09.012 21:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.012 21:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:09.012 21:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.012 21:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:09.274 21:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.274 21:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:09.274 21:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.274 21:02:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:09.274 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.274 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:09.274 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.274 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:09.534 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.534 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:09.534 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.534 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:09.534 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.534 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:09.534 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.534 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:09.795 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.795 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:10.055 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:10.055 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:10.055 21:02:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:10.315 21:02:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:11.258 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:11.258 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:11.258 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.258 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:11.518 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.518 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:11.518 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.518 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:11.779 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.779 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:11.779 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.779 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.779 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.779 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.779 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.779 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:12.039 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.039 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:12.039 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.039 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:12.300 21:02:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.300 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:12.300 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.300 21:02:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:12.300 21:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.300 21:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:12.300 21:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:12.561 21:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:12.822 21:02:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:13.763 21:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:13.763 21:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:13.763 21:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.763 21:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:13.763 21:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.763 21:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:13.763 21:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.763 21:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:14.046 21:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.046 21:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:14.046 21:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.046 21:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:14.323 21:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.323 21:02:17 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:14.323 21:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.323 21:02:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:14.323 21:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.323 21:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:14.323 21:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.323 21:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:14.588 21:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.588 21:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:14.588 21:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.588 21:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:14.848 21:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.848 21:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:14.848 21:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:14.848 21:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:15.107 21:02:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:16.047 21:02:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:16.047 21:02:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:16.047 21:02:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.047 21:02:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:16.307 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.307 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:16.307 21:02:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.307 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:16.567 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.567 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:16.567 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.567 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:16.567 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.567 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:16.567 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.567 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:16.828 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.828 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:16.828 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.828 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:16.828 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.828 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:16.828 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.828 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:17.088 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.088 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:17.088 21:02:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:17.347 21:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:17.347 21:02:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:18.731 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:18.731 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:18.731 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.731 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:18.731 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.731 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:18.731 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.731 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:18.731 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.731 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:18.731 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.731 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:18.991 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.991 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:18.991 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.991 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:19.252 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.252 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:19.252 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.252 21:02:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:19.252 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.252 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:19.252 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.252 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:19.512 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:19.512 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1712425 00:26:19.512 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1712425 ']' 00:26:19.512 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1712425 00:26:19.513 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:19.513 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:19.513 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1712425 00:26:19.513 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:19.513 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:19.513 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1712425' 00:26:19.513 killing process with pid 1712425 00:26:19.513 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1712425 00:26:19.513 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1712425 00:26:19.513 Connection closed with partial response: 00:26:19.513 00:26:19.513 00:26:19.780 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1712425 00:26:19.780 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:19.780 [2024-07-15 21:01:55.850785] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:26:19.780 [2024-07-15 21:01:55.850844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1712425 ] 00:26:19.780 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.780 [2024-07-15 21:01:55.900482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.780 [2024-07-15 21:01:55.952761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.780 Running I/O for 90 seconds... 
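(Editor's aside: each check_status block in the trace above resolves to six one-line assertions against bdevperf's RPC socket. The sketch below is reconstructed from the rpc.py and jq invocations shown in the log; it is not the verbatim host/multipath_status.sh helper.)

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # port_status <trsvcid> <field> <expected>: assert one field of the io_path on that port
  port_status() {
      local port=$1 field=$2 expected=$3 actual
      actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
               | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
      [[ "$actual" == "$expected" ]]
  }
  # e.g. the "check_status true false true true true false" case traced above:
  port_status 4420 current true     && port_status 4421 current false
  port_status 4420 connected true   && port_status 4421 connected true
  port_status 4420 accessible true  && port_status 4421 accessible false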
00:26:19.780 [2024-07-15 21:02:09.021984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.780 [2024-07-15 21:02:09.022018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:19.780 [2024-07-15 21:02:09.022053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.780 [2024-07-15 21:02:09.022060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:19.780 [2024-07-15 21:02:09.022071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.780 [2024-07-15 21:02:09.022077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:19.780 [2024-07-15 21:02:09.022087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.780 [2024-07-15 21:02:09.022092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:19.780 [2024-07-15 21:02:09.022102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.780 [2024-07-15 21:02:09.022107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:19.780 [2024-07-15 21:02:09.022118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.780 [2024-07-15 21:02:09.022126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.780 [2024-07-15 21:02:09.022136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.780 [2024-07-15 21:02:09.022141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.780 [2024-07-15 21:02:09.022151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.780 [2024-07-15 21:02:09.022156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.780 [2024-07-15 21:02:09.022166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.780 [2024-07-15 21:02:09.022172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:19.780 [2024-07-15 21:02:09.022254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.780 [2024-07-15 21:02:09.022261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:19.780 [2024-07-15 21:02:09.022273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.780 [2024-07-15 21:02:09.022282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
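The long run of ASYMMETRIC ACCESS INACCESSIBLE completions above and below is the expected effect of the ANA change made at the start of this block: status (03/02) is NVMe Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible), i.e. I/O failed on the listener that was just made inaccessible, which is what pushes bdevperf's multipath logic onto the other port. To tally these instead of reading the dump, a one-liner such as the following can be run against the saved bdevperf log before the test removes it (illustrative command, path as used by the cat above):

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt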
00:26:19.781 [2024-07-15 21:02:09.022600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.022633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:28720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.781 [2024-07-15 21:02:09.022648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.781 [2024-07-15 21:02:09.022667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.781 [2024-07-15 21:02:09.022683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.781 [2024-07-15 21:02:09.022700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.781 [2024-07-15 21:02:09.022717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.781 [2024-07-15 21:02:09.022733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.781 [2024-07-15 21:02:09.022750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.022761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.781 [2024-07-15 21:02:09.022767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.023713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.023722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.023737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.023742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.023757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.023762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.023776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.023781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.023795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.023800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.023814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.023819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.023833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.023838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.023852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.781 [2024-07-15 21:02:09.023860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.023874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.781 [2024-07-15 21:02:09.023879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.023893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.781 [2024-07-15 21:02:09.023897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.023911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.781 [2024-07-15 21:02:09.023918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.023932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.781 [2024-07-15 21:02:09.023936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:19.781 [2024-07-15 21:02:09.023950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.023955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.023970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.023975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.023989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.023994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 
dnr:0 00:26:19.782 [2024-07-15 21:02:09.024084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:28936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:28968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:28992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:29008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.782 [2024-07-15 21:02:09.024571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.782 [2024-07-15 21:02:09.024593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.782 [2024-07-15 21:02:09.024615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.782 [2024-07-15 21:02:09.024636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.782 [2024-07-15 21:02:09.024659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.782 [2024-07-15 21:02:09.024681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.782 [2024-07-15 21:02:09.024702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.782 [2024-07-15 21:02:09.024723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
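These completions were provoked from the target side by the nvmf_subsystem_listener_set_ana_state call at the top of this section. To confirm what ANA state the target is currently advertising for each listener, the subsystem's listeners can be queried directly. A minimal example, assuming the get-side RPC nvmf_subsystem_get_listeners (the counterpart of the set call used above) and the target's default RPC socket:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 | jq .

The subsystem NQN is the one this test drives (nqn.2016-06.io.spdk:cnode1); the jq . is only there to pretty-print the JSON reply.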
00:26:19.782 [2024-07-15 21:02:09.024745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.782 [2024-07-15 21:02:09.024852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:19.782 [2024-07-15 21:02:09.024868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:09.024873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:09.024889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:09.024895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:09.024912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:09.024918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:09.024934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:09.024940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:09.024957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 
nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:09.024962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:09.024978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:09.024983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:09.024999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:09.025005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:09.025021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:09.025026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:09.025043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:09.025048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:09.025065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:09.025070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:43112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:21.204368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:21.204383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:21.204567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:21.204583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:26:19.783 [2024-07-15 21:02:21.204627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.204725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.783 [2024-07-15 21:02:21.204732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.205660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:21.205674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.205686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:21.205691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.205701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:21.205707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.205717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:21.205722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:19.783 [2024-07-15 21:02:21.205733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.783 [2024-07-15 21:02:21.205738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:19.783 Received shutdown signal, test time was about 25.775400 seconds 00:26:19.783 00:26:19.783 Latency(us) 00:26:19.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.783 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:19.783 Verification LBA range: start 0x0 length 0x4000 00:26:19.783 Nvme0n1 : 25.77 10547.71 41.20 0.00 0.00 12116.74 512.00 3019898.88 00:26:19.783 =================================================================================================================== 00:26:19.783 Total : 10547.71 41.20 0.00 0.00 12116.74 512.00 3019898.88 00:26:19.783 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:19.783 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:19.783 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:19.783 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:19.783 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:19.783 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:19.783 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:19.783 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:19.783 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:19.783 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:19.783 rmmod nvme_tcp 00:26:19.783 rmmod nvme_fabrics 00:26:19.783 rmmod nvme_keyring 00:26:19.783 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:19.783 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:19.784 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:19.784 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1712039 ']' 00:26:19.784 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1712039 00:26:19.784 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1712039 ']' 00:26:19.784 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1712039 00:26:19.784 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:19.784 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:19.784 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1712039 00:26:20.044 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:26:20.044 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:20.044 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1712039' 00:26:20.044 killing process with pid 1712039 00:26:20.044 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1712039 00:26:20.044 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1712039 00:26:20.044 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:20.044 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:20.044 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:20.044 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:20.044 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:20.044 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.044 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:20.044 21:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.624 21:02:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:22.624 00:26:22.624 real 0m39.240s 00:26:22.624 user 1m38.033s 00:26:22.624 sys 0m12.097s 00:26:22.624 21:02:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:22.624 21:02:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:22.624 ************************************ 00:26:22.624 END TEST nvmf_host_multipath_status 00:26:22.624 ************************************ 00:26:22.624 21:02:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:22.624 21:02:25 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:22.624 21:02:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:22.624 21:02:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.624 21:02:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:22.624 ************************************ 00:26:22.624 START TEST nvmf_discovery_remove_ifc 00:26:22.624 ************************************ 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:22.624 * Looking for test storage... 
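Two quick notes on the teardown above. First, the bdevperf summary is internally consistent for 4 KiB I/O: 10547.71 IOPS * 4096 B works out to roughly 43.2 MB/s, which is the reported 41.20 MiB/s. Second, the killprocess helper that was run twice here (pid 1712425 for bdevperf, pid 1712039 for the nvmf target) is easier to read as a sketch than as xtrace; the following is reconstructed from the trace in this log, not the verbatim common/autotest_common.sh source, and the comments are interpretation:

    killprocess() {
        local pid=$1
        # Require a pid and make sure the process is still alive.
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1
        local process_name
        if [ "$(uname)" = Linux ]; then
            # The trace resolves the process name (reactor_2 / reactor_0 here)
            # and compares it against "sudo" before deciding how to kill it.
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
        fi
    }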
00:26:22.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:22.624 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:22.625 21:02:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:29.250 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:29.251 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:29.251 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:29.251 21:02:33 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:29.251 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:29.251 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:29.251 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:29.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:26:29.548 00:26:29.548 --- 10.0.0.2 ping statistics --- 00:26:29.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.548 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:29.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:26:29.548 00:26:29.548 --- 10.0.0.1 ping statistics --- 00:26:29.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.548 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:29.548 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:29.808 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:29.808 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:29.808 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:29.808 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.808 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1722242 00:26:29.808 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1722242 00:26:29.808 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:29.808 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1722242 ']' 00:26:29.808 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.808 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:29.808 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.808 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:29.808 21:02:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.808 [2024-07-15 21:02:33.535736] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:26:29.808 [2024-07-15 21:02:33.535798] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.808 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.808 [2024-07-15 21:02:33.624026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.069 [2024-07-15 21:02:33.714779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.069 [2024-07-15 21:02:33.714836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.069 [2024-07-15 21:02:33.714844] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.069 [2024-07-15 21:02:33.714851] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.069 [2024-07-15 21:02:33.714857] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:30.069 [2024-07-15 21:02:33.714883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.641 [2024-07-15 21:02:34.398329] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.641 [2024-07-15 21:02:34.406519] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:30.641 null0 00:26:30.641 [2024-07-15 21:02:34.438508] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1722362 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1722362 /tmp/host.sock 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1722362 ']' 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:30.641 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:30.641 21:02:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.641 [2024-07-15 21:02:34.523222] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:26:30.641 [2024-07-15 21:02:34.523291] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722362 ] 00:26:30.902 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.902 [2024-07-15 21:02:34.586872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.902 [2024-07-15 21:02:34.661796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.472 21:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:31.472 21:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:31.472 21:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:31.472 21:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:31.472 21:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.472 21:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.472 21:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.472 21:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:31.472 21:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.472 21:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.472 21:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.472 21:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:31.472 21:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.472 21:02:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.855 [2024-07-15 21:02:36.366715] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:32.855 [2024-07-15 21:02:36.366739] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:32.855 [2024-07-15 21:02:36.366753] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:32.855 [2024-07-15 21:02:36.455022] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:32.855 [2024-07-15 21:02:36.559829] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:32.855 [2024-07-15 21:02:36.559877] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:32.855 [2024-07-15 21:02:36.559901] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:32.855 [2024-07-15 21:02:36.559915] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:32.855 [2024-07-15 21:02:36.559934] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:32.855 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.855 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:32.855 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:32.855 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.855 [2024-07-15 21:02:36.566716] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc1a7b0 was disconnected and freed. delete nvme_qpair. 00:26:32.855 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:32.855 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.855 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.855 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.855 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:32.855 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.855 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:32.855 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:32.855 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:32.855 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:32.855 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.115 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.115 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.115 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.115 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.115 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.115 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.115 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.115 21:02:36 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:33.115 21:02:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:34.056 21:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.056 21:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.056 21:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.056 21:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.056 21:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.056 21:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.056 21:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.056 21:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.056 21:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:34.056 21:02:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:34.995 21:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.995 21:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.995 21:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.995 21:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.995 21:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.995 21:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.995 21:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.254 21:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.254 21:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:35.254 21:02:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:36.221 21:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:36.221 21:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.221 21:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:36.221 21:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.221 21:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:36.221 21:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.221 21:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:36.221 21:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.221 21:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:36.221 21:02:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:37.163 21:02:40 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.163 21:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.163 21:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.163 21:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.163 21:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.163 21:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.163 21:02:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.163 21:02:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.163 21:02:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:37.163 21:02:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.140 [2024-07-15 21:02:42.000299] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:38.140 [2024-07-15 21:02:42.000347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.140 [2024-07-15 21:02:42.000359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.140 [2024-07-15 21:02:42.000369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.140 [2024-07-15 21:02:42.000376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.140 [2024-07-15 21:02:42.000384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.140 [2024-07-15 21:02:42.000392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.140 [2024-07-15 21:02:42.000399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.140 [2024-07-15 21:02:42.000406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.140 [2024-07-15 21:02:42.000415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.140 [2024-07-15 21:02:42.000422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.140 [2024-07-15 21:02:42.000429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1040 is same with the state(5) to be set 00:26:38.140 [2024-07-15 21:02:42.010320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe1040 (9): Bad file descriptor 00:26:38.140 [2024-07-15 21:02:42.020360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:38.400 21:02:42 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.400 21:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.400 21:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.400 21:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.400 21:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.400 21:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.400 21:02:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.340 [2024-07-15 21:02:43.081148] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:39.340 [2024-07-15 21:02:43.081187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe1040 with addr=10.0.0.2, port=4420 00:26:39.340 [2024-07-15 21:02:43.081198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1040 is same with the state(5) to be set 00:26:39.340 [2024-07-15 21:02:43.081219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe1040 (9): Bad file descriptor 00:26:39.340 [2024-07-15 21:02:43.081583] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:39.340 [2024-07-15 21:02:43.081602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:39.340 [2024-07-15 21:02:43.081610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:39.340 [2024-07-15 21:02:43.081619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:39.340 [2024-07-15 21:02:43.081635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.340 [2024-07-15 21:02:43.081649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:39.340 21:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.340 21:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:39.340 21:02:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.281 [2024-07-15 21:02:44.084025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:40.281 [2024-07-15 21:02:44.084044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.281 [2024-07-15 21:02:44.084052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.281 [2024-07-15 21:02:44.084060] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:40.281 [2024-07-15 21:02:44.084071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
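The errno 110 and "Resetting controller failed" messages above are the expected outcome of this test, not a fault in the run: discovery_remove_ifc.sh deliberately pulls the target-side address out from under a connected host and checks that the attached bdev disappears. A minimal sketch of the two moving parts, reusing the names from this run (cvl_0_0 inside the cvl_0_0_ns_spdk namespace, 10.0.0.2, /tmp/host.sock) and calling scripts/rpc.py directly instead of the harness's rpc_cmd wrapper:

  # Attach through discovery with the same short reconnect budget the test uses,
  # so a dead path is declared failed within a couple of seconds.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

  # Break the path from the target side: drop the address and down the interface
  # inside the target's network namespace. The host then sees connect() fail with
  # errno 110 and gives up once ctrlr-loss-timeout-sec expires.
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down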
00:26:40.281 [2024-07-15 21:02:44.084092] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:40.281 [2024-07-15 21:02:44.084112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.281 [2024-07-15 21:02:44.084125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.281 [2024-07-15 21:02:44.084134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.281 [2024-07-15 21:02:44.084142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.281 [2024-07-15 21:02:44.084150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.281 [2024-07-15 21:02:44.084157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.281 [2024-07-15 21:02:44.084166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.281 [2024-07-15 21:02:44.084173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.281 [2024-07-15 21:02:44.084181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.281 [2024-07-15 21:02:44.084188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.281 [2024-07-15 21:02:44.084195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
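Throughout this stretch the harness checks for the bdev state it expects by pairing get_bdev_list with a one-second sleep (host/discovery_remove_ifc.sh@29 and @34 above). The loop below is an approximation of that polling pattern built from the same rpc_cmd arguments seen in the trace; the real wait_for_bdev presumably also carries timeout handling that is not visible here.

  host_sock=/tmp/host.sock

  get_bdev_list() {
      # Same pipeline as the trace: bdev names only, sorted, joined onto one line.
      scripts/rpc.py -s "$host_sock" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1   # the trace retries once per second
      done
  }

  wait_for_bdev ""   # after the interface is removed, the list must drain to empty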
00:26:40.281 [2024-07-15 21:02:44.084578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe04c0 (9): Bad file descriptor 00:26:40.281 [2024-07-15 21:02:44.085589] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:40.281 [2024-07-15 21:02:44.085600] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:40.281 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.281 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.281 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.281 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.281 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.281 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.281 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.281 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.281 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:40.281 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.542 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.542 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:40.542 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.542 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.542 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.542 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.542 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.542 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.542 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.542 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.542 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:40.542 21:02:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:41.484 21:02:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.484 21:02:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.484 21:02:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.484 21:02:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.484 21:02:45 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:26:41.484 21:02:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.485 21:02:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.485 21:02:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.485 21:02:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:41.485 21:02:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:42.424 [2024-07-15 21:02:46.143342] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:42.424 [2024-07-15 21:02:46.143371] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:42.424 [2024-07-15 21:02:46.143385] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:42.424 [2024-07-15 21:02:46.230661] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:42.684 21:02:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.684 21:02:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.684 21:02:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.684 21:02:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.684 21:02:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.684 21:02:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.684 21:02:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.684 21:02:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.684 21:02:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:42.684 21:02:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:42.684 [2024-07-15 21:02:46.455150] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:42.684 [2024-07-15 21:02:46.455193] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:42.684 [2024-07-15 21:02:46.455222] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:42.684 [2024-07-15 21:02:46.455237] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:42.684 [2024-07-15 21:02:46.455245] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:42.684 [2024-07-15 21:02:46.500954] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbf7310 was disconnected and freed. delete nvme_qpair. 
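Recovery is the mirror image of the fault injection: the script restores the address and link (discovery_remove_ifc.sh@82-83 above), the still-running discovery connection to 10.0.0.2:8009 picks the subsystem up again, and it re-attaches under a new controller name, which is why the test now waits for nvme1n1 rather than nvme0n1. The restore step, verbatim from the trace:

  # Put the target-side path back; the discovery service re-attaches the
  # subsystem and the namespace reappears as nvme1n1.
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # then: wait_for_bdev nvme1n1  (polling helper sketched earlier)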
00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1722362 00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1722362 ']' 00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1722362 00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:43.625 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1722362 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1722362' 00:26:43.884 killing process with pid 1722362 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1722362 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1722362 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:43.884 rmmod nvme_tcp 00:26:43.884 rmmod nvme_fabrics 00:26:43.884 rmmod nvme_keyring 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
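The host-side cleanup above stops the /tmp/host.sock application and unloads the kernel NVMe/TCP modules; the lines that follow kill the target and flush its addresses. A condensed sketch of that teardown in the order this run performs it, using the hostpid/nvmfpid variables from the scripts; the explicit ip netns del is an assumption, since the trace hides _remove_spdk_ns behind a redirect:

  # Stop the host application, then unload the kernel initiator modules
  # pulled in earlier by 'modprobe nvme-tcp'.
  kill "$hostpid"               # 1722362 in this run
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Stop the nvmf target and undo the TCP test topology.
  kill "$nvmfpid"               # 1722242 in this run
  ip -4 addr flush cvl_0_1
  ip netns del cvl_0_0_ns_spdk  # assumed equivalent of _remove_spdk_ns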
00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1722242 ']' 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1722242 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1722242 ']' 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1722242 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:43.884 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1722242 00:26:44.143 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:44.143 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:44.143 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1722242' 00:26:44.143 killing process with pid 1722242 00:26:44.143 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1722242 00:26:44.143 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1722242 00:26:44.143 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:44.143 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:44.144 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:44.144 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:44.144 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:44.144 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.144 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:44.144 21:02:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.684 21:02:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:46.684 00:26:46.684 real 0m23.958s 00:26:46.684 user 0m29.046s 00:26:46.684 sys 0m6.845s 00:26:46.684 21:02:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:46.684 21:02:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.684 ************************************ 00:26:46.684 END TEST nvmf_discovery_remove_ifc 00:26:46.684 ************************************ 00:26:46.684 21:02:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:46.684 21:02:50 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:46.684 21:02:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:46.684 21:02:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:46.684 21:02:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:46.684 ************************************ 00:26:46.684 START TEST nvmf_identify_kernel_target 00:26:46.684 ************************************ 
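The real/user/sys timing and the END/START banners above come from the run_test helper in autotest_common.sh, through which every test in nvmf.sh is funnelled; the next entry in this job is identify_kernel_nvmf.sh with the same --transport=tcp argument. A rough stand-in for what the wrapper does (the actual helper also manages xtrace and argument checking, e.g. the '[' 3 -le 1 ']' guard visible above):

  run_test() {
      # run_test <name> <script> [args...]: frame the script with banners and time it.
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }

  run_test nvmf_identify_kernel_target \
      ./test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp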
00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:46.684 * Looking for test storage... 00:26:46.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.684 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:46.685 21:02:50 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:46.685 21:02:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:53.294 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:53.294 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.294 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:53.295 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:53.295 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.295 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:53.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:26:53.556 00:26:53.556 --- 10.0.0.2 ping statistics --- 00:26:53.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.556 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:53.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.385 ms 00:26:53.556 00:26:53.556 --- 10.0.0.1 ping statistics --- 00:26:53.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.556 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:53.556 21:02:57 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:53.556 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:53.816 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:53.816 21:02:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:56.359 Waiting for block devices as requested 00:26:56.619 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:56.619 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:56.619 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:56.880 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:56.880 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:56.880 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:57.141 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:57.141 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:57.141 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:57.402 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:57.402 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:57.402 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:57.684 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:57.684 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:57.684 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:57.684 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:57.945 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:58.207 No valid GPT data, bailing 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:58.207 21:03:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:58.207 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:58.207 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:58.207 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:58.207 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:58.207 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:58.207 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:58.207 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:58.207 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:58.207 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:58.207 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:58.207 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:58.469 00:26:58.469 Discovery Log Number of Records 2, Generation counter 2 00:26:58.469 =====Discovery Log Entry 0====== 00:26:58.469 trtype: tcp 00:26:58.469 adrfam: ipv4 00:26:58.469 subtype: current discovery subsystem 00:26:58.469 treq: not specified, sq flow control disable supported 00:26:58.469 portid: 1 00:26:58.469 trsvcid: 4420 00:26:58.469 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:58.469 traddr: 10.0.0.1 00:26:58.469 eflags: none 00:26:58.469 sectype: none 00:26:58.469 =====Discovery Log Entry 1====== 00:26:58.469 trtype: tcp 00:26:58.469 adrfam: ipv4 00:26:58.469 subtype: nvme subsystem 00:26:58.469 treq: not specified, sq flow control disable supported 00:26:58.469 portid: 1 00:26:58.469 trsvcid: 4420 00:26:58.469 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:58.469 traddr: 10.0.0.1 00:26:58.469 eflags: none 00:26:58.469 sectype: none 00:26:58.469 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:58.469 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:58.469 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.469 ===================================================== 00:26:58.469 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:58.469 ===================================================== 00:26:58.469 Controller Capabilities/Features 00:26:58.469 ================================ 00:26:58.469 Vendor ID: 0000 00:26:58.469 Subsystem Vendor ID: 0000 00:26:58.469 Serial Number: c3af0d007d027bbc749a 00:26:58.469 Model Number: Linux 00:26:58.469 Firmware Version: 6.7.0-68 00:26:58.469 Recommended Arb Burst: 0 00:26:58.469 IEEE OUI Identifier: 00 00 00 00:26:58.469 Multi-path I/O 00:26:58.469 May have multiple subsystem ports: No 00:26:58.469 May have multiple 
controllers: No 00:26:58.470 Associated with SR-IOV VF: No 00:26:58.470 Max Data Transfer Size: Unlimited 00:26:58.470 Max Number of Namespaces: 0 00:26:58.470 Max Number of I/O Queues: 1024 00:26:58.470 NVMe Specification Version (VS): 1.3 00:26:58.470 NVMe Specification Version (Identify): 1.3 00:26:58.470 Maximum Queue Entries: 1024 00:26:58.470 Contiguous Queues Required: No 00:26:58.470 Arbitration Mechanisms Supported 00:26:58.470 Weighted Round Robin: Not Supported 00:26:58.470 Vendor Specific: Not Supported 00:26:58.470 Reset Timeout: 7500 ms 00:26:58.470 Doorbell Stride: 4 bytes 00:26:58.470 NVM Subsystem Reset: Not Supported 00:26:58.470 Command Sets Supported 00:26:58.470 NVM Command Set: Supported 00:26:58.470 Boot Partition: Not Supported 00:26:58.470 Memory Page Size Minimum: 4096 bytes 00:26:58.470 Memory Page Size Maximum: 4096 bytes 00:26:58.470 Persistent Memory Region: Not Supported 00:26:58.470 Optional Asynchronous Events Supported 00:26:58.470 Namespace Attribute Notices: Not Supported 00:26:58.470 Firmware Activation Notices: Not Supported 00:26:58.470 ANA Change Notices: Not Supported 00:26:58.470 PLE Aggregate Log Change Notices: Not Supported 00:26:58.470 LBA Status Info Alert Notices: Not Supported 00:26:58.470 EGE Aggregate Log Change Notices: Not Supported 00:26:58.470 Normal NVM Subsystem Shutdown event: Not Supported 00:26:58.470 Zone Descriptor Change Notices: Not Supported 00:26:58.470 Discovery Log Change Notices: Supported 00:26:58.470 Controller Attributes 00:26:58.470 128-bit Host Identifier: Not Supported 00:26:58.470 Non-Operational Permissive Mode: Not Supported 00:26:58.470 NVM Sets: Not Supported 00:26:58.470 Read Recovery Levels: Not Supported 00:26:58.470 Endurance Groups: Not Supported 00:26:58.470 Predictable Latency Mode: Not Supported 00:26:58.470 Traffic Based Keep ALive: Not Supported 00:26:58.470 Namespace Granularity: Not Supported 00:26:58.470 SQ Associations: Not Supported 00:26:58.470 UUID List: Not Supported 00:26:58.470 Multi-Domain Subsystem: Not Supported 00:26:58.470 Fixed Capacity Management: Not Supported 00:26:58.470 Variable Capacity Management: Not Supported 00:26:58.470 Delete Endurance Group: Not Supported 00:26:58.470 Delete NVM Set: Not Supported 00:26:58.470 Extended LBA Formats Supported: Not Supported 00:26:58.470 Flexible Data Placement Supported: Not Supported 00:26:58.470 00:26:58.470 Controller Memory Buffer Support 00:26:58.470 ================================ 00:26:58.470 Supported: No 00:26:58.470 00:26:58.470 Persistent Memory Region Support 00:26:58.470 ================================ 00:26:58.470 Supported: No 00:26:58.470 00:26:58.470 Admin Command Set Attributes 00:26:58.470 ============================ 00:26:58.470 Security Send/Receive: Not Supported 00:26:58.470 Format NVM: Not Supported 00:26:58.470 Firmware Activate/Download: Not Supported 00:26:58.470 Namespace Management: Not Supported 00:26:58.470 Device Self-Test: Not Supported 00:26:58.470 Directives: Not Supported 00:26:58.470 NVMe-MI: Not Supported 00:26:58.470 Virtualization Management: Not Supported 00:26:58.470 Doorbell Buffer Config: Not Supported 00:26:58.470 Get LBA Status Capability: Not Supported 00:26:58.470 Command & Feature Lockdown Capability: Not Supported 00:26:58.470 Abort Command Limit: 1 00:26:58.470 Async Event Request Limit: 1 00:26:58.470 Number of Firmware Slots: N/A 00:26:58.470 Firmware Slot 1 Read-Only: N/A 00:26:58.470 Firmware Activation Without Reset: N/A 00:26:58.470 Multiple Update Detection Support: N/A 
00:26:58.470 Firmware Update Granularity: No Information Provided 00:26:58.470 Per-Namespace SMART Log: No 00:26:58.470 Asymmetric Namespace Access Log Page: Not Supported 00:26:58.470 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:58.470 Command Effects Log Page: Not Supported 00:26:58.470 Get Log Page Extended Data: Supported 00:26:58.470 Telemetry Log Pages: Not Supported 00:26:58.470 Persistent Event Log Pages: Not Supported 00:26:58.470 Supported Log Pages Log Page: May Support 00:26:58.470 Commands Supported & Effects Log Page: Not Supported 00:26:58.470 Feature Identifiers & Effects Log Page:May Support 00:26:58.470 NVMe-MI Commands & Effects Log Page: May Support 00:26:58.470 Data Area 4 for Telemetry Log: Not Supported 00:26:58.470 Error Log Page Entries Supported: 1 00:26:58.470 Keep Alive: Not Supported 00:26:58.470 00:26:58.470 NVM Command Set Attributes 00:26:58.470 ========================== 00:26:58.470 Submission Queue Entry Size 00:26:58.470 Max: 1 00:26:58.470 Min: 1 00:26:58.470 Completion Queue Entry Size 00:26:58.470 Max: 1 00:26:58.470 Min: 1 00:26:58.470 Number of Namespaces: 0 00:26:58.470 Compare Command: Not Supported 00:26:58.470 Write Uncorrectable Command: Not Supported 00:26:58.470 Dataset Management Command: Not Supported 00:26:58.470 Write Zeroes Command: Not Supported 00:26:58.470 Set Features Save Field: Not Supported 00:26:58.470 Reservations: Not Supported 00:26:58.470 Timestamp: Not Supported 00:26:58.470 Copy: Not Supported 00:26:58.470 Volatile Write Cache: Not Present 00:26:58.470 Atomic Write Unit (Normal): 1 00:26:58.470 Atomic Write Unit (PFail): 1 00:26:58.470 Atomic Compare & Write Unit: 1 00:26:58.470 Fused Compare & Write: Not Supported 00:26:58.470 Scatter-Gather List 00:26:58.470 SGL Command Set: Supported 00:26:58.470 SGL Keyed: Not Supported 00:26:58.470 SGL Bit Bucket Descriptor: Not Supported 00:26:58.470 SGL Metadata Pointer: Not Supported 00:26:58.470 Oversized SGL: Not Supported 00:26:58.470 SGL Metadata Address: Not Supported 00:26:58.470 SGL Offset: Supported 00:26:58.470 Transport SGL Data Block: Not Supported 00:26:58.470 Replay Protected Memory Block: Not Supported 00:26:58.470 00:26:58.470 Firmware Slot Information 00:26:58.470 ========================= 00:26:58.470 Active slot: 0 00:26:58.470 00:26:58.470 00:26:58.470 Error Log 00:26:58.470 ========= 00:26:58.470 00:26:58.470 Active Namespaces 00:26:58.470 ================= 00:26:58.470 Discovery Log Page 00:26:58.470 ================== 00:26:58.470 Generation Counter: 2 00:26:58.470 Number of Records: 2 00:26:58.470 Record Format: 0 00:26:58.470 00:26:58.470 Discovery Log Entry 0 00:26:58.470 ---------------------- 00:26:58.470 Transport Type: 3 (TCP) 00:26:58.470 Address Family: 1 (IPv4) 00:26:58.470 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:58.470 Entry Flags: 00:26:58.470 Duplicate Returned Information: 0 00:26:58.470 Explicit Persistent Connection Support for Discovery: 0 00:26:58.470 Transport Requirements: 00:26:58.470 Secure Channel: Not Specified 00:26:58.470 Port ID: 1 (0x0001) 00:26:58.470 Controller ID: 65535 (0xffff) 00:26:58.470 Admin Max SQ Size: 32 00:26:58.470 Transport Service Identifier: 4420 00:26:58.470 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:58.470 Transport Address: 10.0.0.1 00:26:58.470 Discovery Log Entry 1 00:26:58.470 ---------------------- 00:26:58.470 Transport Type: 3 (TCP) 00:26:58.470 Address Family: 1 (IPv4) 00:26:58.470 Subsystem Type: 2 (NVM Subsystem) 00:26:58.470 Entry Flags: 
00:26:58.470 Duplicate Returned Information: 0 00:26:58.470 Explicit Persistent Connection Support for Discovery: 0 00:26:58.470 Transport Requirements: 00:26:58.470 Secure Channel: Not Specified 00:26:58.470 Port ID: 1 (0x0001) 00:26:58.470 Controller ID: 65535 (0xffff) 00:26:58.470 Admin Max SQ Size: 32 00:26:58.470 Transport Service Identifier: 4420 00:26:58.470 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:58.470 Transport Address: 10.0.0.1 00:26:58.470 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:58.470 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.470 get_feature(0x01) failed 00:26:58.470 get_feature(0x02) failed 00:26:58.470 get_feature(0x04) failed 00:26:58.470 ===================================================== 00:26:58.470 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:58.470 ===================================================== 00:26:58.470 Controller Capabilities/Features 00:26:58.470 ================================ 00:26:58.470 Vendor ID: 0000 00:26:58.470 Subsystem Vendor ID: 0000 00:26:58.470 Serial Number: 52e2521c5cd1149c753c 00:26:58.470 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:58.470 Firmware Version: 6.7.0-68 00:26:58.470 Recommended Arb Burst: 6 00:26:58.470 IEEE OUI Identifier: 00 00 00 00:26:58.470 Multi-path I/O 00:26:58.470 May have multiple subsystem ports: Yes 00:26:58.470 May have multiple controllers: Yes 00:26:58.470 Associated with SR-IOV VF: No 00:26:58.470 Max Data Transfer Size: Unlimited 00:26:58.470 Max Number of Namespaces: 1024 00:26:58.470 Max Number of I/O Queues: 128 00:26:58.470 NVMe Specification Version (VS): 1.3 00:26:58.470 NVMe Specification Version (Identify): 1.3 00:26:58.470 Maximum Queue Entries: 1024 00:26:58.470 Contiguous Queues Required: No 00:26:58.470 Arbitration Mechanisms Supported 00:26:58.470 Weighted Round Robin: Not Supported 00:26:58.470 Vendor Specific: Not Supported 00:26:58.470 Reset Timeout: 7500 ms 00:26:58.470 Doorbell Stride: 4 bytes 00:26:58.470 NVM Subsystem Reset: Not Supported 00:26:58.470 Command Sets Supported 00:26:58.470 NVM Command Set: Supported 00:26:58.471 Boot Partition: Not Supported 00:26:58.471 Memory Page Size Minimum: 4096 bytes 00:26:58.471 Memory Page Size Maximum: 4096 bytes 00:26:58.471 Persistent Memory Region: Not Supported 00:26:58.471 Optional Asynchronous Events Supported 00:26:58.471 Namespace Attribute Notices: Supported 00:26:58.471 Firmware Activation Notices: Not Supported 00:26:58.471 ANA Change Notices: Supported 00:26:58.471 PLE Aggregate Log Change Notices: Not Supported 00:26:58.471 LBA Status Info Alert Notices: Not Supported 00:26:58.471 EGE Aggregate Log Change Notices: Not Supported 00:26:58.471 Normal NVM Subsystem Shutdown event: Not Supported 00:26:58.471 Zone Descriptor Change Notices: Not Supported 00:26:58.471 Discovery Log Change Notices: Not Supported 00:26:58.471 Controller Attributes 00:26:58.471 128-bit Host Identifier: Supported 00:26:58.471 Non-Operational Permissive Mode: Not Supported 00:26:58.471 NVM Sets: Not Supported 00:26:58.471 Read Recovery Levels: Not Supported 00:26:58.471 Endurance Groups: Not Supported 00:26:58.471 Predictable Latency Mode: Not Supported 00:26:58.471 Traffic Based Keep ALive: Supported 00:26:58.471 Namespace Granularity: Not Supported 
00:26:58.471 SQ Associations: Not Supported 00:26:58.471 UUID List: Not Supported 00:26:58.471 Multi-Domain Subsystem: Not Supported 00:26:58.471 Fixed Capacity Management: Not Supported 00:26:58.471 Variable Capacity Management: Not Supported 00:26:58.471 Delete Endurance Group: Not Supported 00:26:58.471 Delete NVM Set: Not Supported 00:26:58.471 Extended LBA Formats Supported: Not Supported 00:26:58.471 Flexible Data Placement Supported: Not Supported 00:26:58.471 00:26:58.471 Controller Memory Buffer Support 00:26:58.471 ================================ 00:26:58.471 Supported: No 00:26:58.471 00:26:58.471 Persistent Memory Region Support 00:26:58.471 ================================ 00:26:58.471 Supported: No 00:26:58.471 00:26:58.471 Admin Command Set Attributes 00:26:58.471 ============================ 00:26:58.471 Security Send/Receive: Not Supported 00:26:58.471 Format NVM: Not Supported 00:26:58.471 Firmware Activate/Download: Not Supported 00:26:58.471 Namespace Management: Not Supported 00:26:58.471 Device Self-Test: Not Supported 00:26:58.471 Directives: Not Supported 00:26:58.471 NVMe-MI: Not Supported 00:26:58.471 Virtualization Management: Not Supported 00:26:58.471 Doorbell Buffer Config: Not Supported 00:26:58.471 Get LBA Status Capability: Not Supported 00:26:58.471 Command & Feature Lockdown Capability: Not Supported 00:26:58.471 Abort Command Limit: 4 00:26:58.471 Async Event Request Limit: 4 00:26:58.471 Number of Firmware Slots: N/A 00:26:58.471 Firmware Slot 1 Read-Only: N/A 00:26:58.471 Firmware Activation Without Reset: N/A 00:26:58.471 Multiple Update Detection Support: N/A 00:26:58.471 Firmware Update Granularity: No Information Provided 00:26:58.471 Per-Namespace SMART Log: Yes 00:26:58.471 Asymmetric Namespace Access Log Page: Supported 00:26:58.471 ANA Transition Time : 10 sec 00:26:58.471 00:26:58.471 Asymmetric Namespace Access Capabilities 00:26:58.471 ANA Optimized State : Supported 00:26:58.471 ANA Non-Optimized State : Supported 00:26:58.471 ANA Inaccessible State : Supported 00:26:58.471 ANA Persistent Loss State : Supported 00:26:58.471 ANA Change State : Supported 00:26:58.471 ANAGRPID is not changed : No 00:26:58.471 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:58.471 00:26:58.471 ANA Group Identifier Maximum : 128 00:26:58.471 Number of ANA Group Identifiers : 128 00:26:58.471 Max Number of Allowed Namespaces : 1024 00:26:58.471 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:58.471 Command Effects Log Page: Supported 00:26:58.471 Get Log Page Extended Data: Supported 00:26:58.471 Telemetry Log Pages: Not Supported 00:26:58.471 Persistent Event Log Pages: Not Supported 00:26:58.471 Supported Log Pages Log Page: May Support 00:26:58.471 Commands Supported & Effects Log Page: Not Supported 00:26:58.471 Feature Identifiers & Effects Log Page:May Support 00:26:58.471 NVMe-MI Commands & Effects Log Page: May Support 00:26:58.471 Data Area 4 for Telemetry Log: Not Supported 00:26:58.471 Error Log Page Entries Supported: 128 00:26:58.471 Keep Alive: Supported 00:26:58.471 Keep Alive Granularity: 1000 ms 00:26:58.471 00:26:58.471 NVM Command Set Attributes 00:26:58.471 ========================== 00:26:58.471 Submission Queue Entry Size 00:26:58.471 Max: 64 00:26:58.471 Min: 64 00:26:58.471 Completion Queue Entry Size 00:26:58.471 Max: 16 00:26:58.471 Min: 16 00:26:58.471 Number of Namespaces: 1024 00:26:58.471 Compare Command: Not Supported 00:26:58.471 Write Uncorrectable Command: Not Supported 00:26:58.471 Dataset Management Command: Supported 
00:26:58.471 Write Zeroes Command: Supported 00:26:58.471 Set Features Save Field: Not Supported 00:26:58.471 Reservations: Not Supported 00:26:58.471 Timestamp: Not Supported 00:26:58.471 Copy: Not Supported 00:26:58.471 Volatile Write Cache: Present 00:26:58.471 Atomic Write Unit (Normal): 1 00:26:58.471 Atomic Write Unit (PFail): 1 00:26:58.471 Atomic Compare & Write Unit: 1 00:26:58.471 Fused Compare & Write: Not Supported 00:26:58.471 Scatter-Gather List 00:26:58.471 SGL Command Set: Supported 00:26:58.471 SGL Keyed: Not Supported 00:26:58.471 SGL Bit Bucket Descriptor: Not Supported 00:26:58.471 SGL Metadata Pointer: Not Supported 00:26:58.471 Oversized SGL: Not Supported 00:26:58.471 SGL Metadata Address: Not Supported 00:26:58.471 SGL Offset: Supported 00:26:58.471 Transport SGL Data Block: Not Supported 00:26:58.471 Replay Protected Memory Block: Not Supported 00:26:58.471 00:26:58.471 Firmware Slot Information 00:26:58.471 ========================= 00:26:58.471 Active slot: 0 00:26:58.471 00:26:58.471 Asymmetric Namespace Access 00:26:58.471 =========================== 00:26:58.471 Change Count : 0 00:26:58.471 Number of ANA Group Descriptors : 1 00:26:58.471 ANA Group Descriptor : 0 00:26:58.471 ANA Group ID : 1 00:26:58.471 Number of NSID Values : 1 00:26:58.471 Change Count : 0 00:26:58.471 ANA State : 1 00:26:58.471 Namespace Identifier : 1 00:26:58.471 00:26:58.471 Commands Supported and Effects 00:26:58.471 ============================== 00:26:58.471 Admin Commands 00:26:58.471 -------------- 00:26:58.471 Get Log Page (02h): Supported 00:26:58.471 Identify (06h): Supported 00:26:58.471 Abort (08h): Supported 00:26:58.471 Set Features (09h): Supported 00:26:58.471 Get Features (0Ah): Supported 00:26:58.471 Asynchronous Event Request (0Ch): Supported 00:26:58.471 Keep Alive (18h): Supported 00:26:58.471 I/O Commands 00:26:58.471 ------------ 00:26:58.471 Flush (00h): Supported 00:26:58.471 Write (01h): Supported LBA-Change 00:26:58.471 Read (02h): Supported 00:26:58.471 Write Zeroes (08h): Supported LBA-Change 00:26:58.471 Dataset Management (09h): Supported 00:26:58.471 00:26:58.471 Error Log 00:26:58.471 ========= 00:26:58.471 Entry: 0 00:26:58.471 Error Count: 0x3 00:26:58.471 Submission Queue Id: 0x0 00:26:58.471 Command Id: 0x5 00:26:58.471 Phase Bit: 0 00:26:58.471 Status Code: 0x2 00:26:58.471 Status Code Type: 0x0 00:26:58.471 Do Not Retry: 1 00:26:58.471 Error Location: 0x28 00:26:58.471 LBA: 0x0 00:26:58.471 Namespace: 0x0 00:26:58.471 Vendor Log Page: 0x0 00:26:58.471 ----------- 00:26:58.471 Entry: 1 00:26:58.471 Error Count: 0x2 00:26:58.471 Submission Queue Id: 0x0 00:26:58.471 Command Id: 0x5 00:26:58.471 Phase Bit: 0 00:26:58.471 Status Code: 0x2 00:26:58.471 Status Code Type: 0x0 00:26:58.471 Do Not Retry: 1 00:26:58.471 Error Location: 0x28 00:26:58.471 LBA: 0x0 00:26:58.471 Namespace: 0x0 00:26:58.471 Vendor Log Page: 0x0 00:26:58.471 ----------- 00:26:58.471 Entry: 2 00:26:58.471 Error Count: 0x1 00:26:58.471 Submission Queue Id: 0x0 00:26:58.471 Command Id: 0x4 00:26:58.471 Phase Bit: 0 00:26:58.471 Status Code: 0x2 00:26:58.471 Status Code Type: 0x0 00:26:58.471 Do Not Retry: 1 00:26:58.471 Error Location: 0x28 00:26:58.471 LBA: 0x0 00:26:58.471 Namespace: 0x0 00:26:58.471 Vendor Log Page: 0x0 00:26:58.471 00:26:58.471 Number of Queues 00:26:58.471 ================ 00:26:58.471 Number of I/O Submission Queues: 128 00:26:58.471 Number of I/O Completion Queues: 128 00:26:58.471 00:26:58.471 ZNS Specific Controller Data 00:26:58.471 
============================ 00:26:58.471 Zone Append Size Limit: 0 00:26:58.471 00:26:58.471 00:26:58.471 Active Namespaces 00:26:58.471 ================= 00:26:58.471 get_feature(0x05) failed 00:26:58.471 Namespace ID:1 00:26:58.471 Command Set Identifier: NVM (00h) 00:26:58.471 Deallocate: Supported 00:26:58.471 Deallocated/Unwritten Error: Not Supported 00:26:58.471 Deallocated Read Value: Unknown 00:26:58.471 Deallocate in Write Zeroes: Not Supported 00:26:58.471 Deallocated Guard Field: 0xFFFF 00:26:58.471 Flush: Supported 00:26:58.471 Reservation: Not Supported 00:26:58.471 Namespace Sharing Capabilities: Multiple Controllers 00:26:58.471 Size (in LBAs): 3750748848 (1788GiB) 00:26:58.472 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:58.472 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:58.472 UUID: 5165e675-9982-47f5-b6b0-cbfbf19e9904 00:26:58.472 Thin Provisioning: Not Supported 00:26:58.472 Per-NS Atomic Units: Yes 00:26:58.472 Atomic Write Unit (Normal): 8 00:26:58.472 Atomic Write Unit (PFail): 8 00:26:58.472 Preferred Write Granularity: 8 00:26:58.472 Atomic Compare & Write Unit: 8 00:26:58.472 Atomic Boundary Size (Normal): 0 00:26:58.472 Atomic Boundary Size (PFail): 0 00:26:58.472 Atomic Boundary Offset: 0 00:26:58.472 NGUID/EUI64 Never Reused: No 00:26:58.472 ANA group ID: 1 00:26:58.472 Namespace Write Protected: No 00:26:58.472 Number of LBA Formats: 1 00:26:58.472 Current LBA Format: LBA Format #00 00:26:58.472 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:58.472 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:58.472 rmmod nvme_tcp 00:26:58.472 rmmod nvme_fabrics 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:58.472 21:03:02 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.017 21:03:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:01.017 21:03:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:01.017 21:03:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:01.017 21:03:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:01.017 21:03:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:01.017 21:03:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:01.017 21:03:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:01.017 21:03:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:01.017 21:03:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:01.017 21:03:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:01.017 21:03:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:04.371 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:04.371 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:04.631 00:27:04.631 real 0m18.320s 00:27:04.631 user 0m4.832s 00:27:04.631 sys 0m10.421s 00:27:04.631 21:03:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:04.631 21:03:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:04.631 ************************************ 00:27:04.631 END TEST nvmf_identify_kernel_target 00:27:04.631 ************************************ 00:27:04.631 21:03:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:04.631 21:03:08 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:04.631 21:03:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:04.631 21:03:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:04.631 21:03:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:04.631 ************************************ 
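The identify_kernel_nvmf test above drives the Linux kernel NVMe-oF target (nvmet) entirely through configfs: nvmf/common.sh@658-677 creates the subsystem, namespace and port directories, echoes the attribute values, and links the subsystem into the port, and the teardown at the end removes it again with modprobe -r nvmet_tcp nvmet. Bash xtrace does not print redirections, so the log only shows the bare mkdir/echo commands; the following is a minimal sketch of the equivalent manual setup, assuming the standard nvmet configfs attribute names (attr_model, attr_allow_any_host, device_path, enable, addr_traddr, addr_trtype, addr_trsvcid, addr_adrfam) as the redirect targets, since those targets are not visible in the trace itself.

  # Target-side modules (the log loads nvmet at common.sh@642 and removes nvmet_tcp/nvmet on cleanup;
  # nvme-tcp at common.sh@474 is the initiator-side driver).
  modprobe nvmet nvmet_tcp
  cd /sys/kernel/config/nvmet
  # Subsystem with one namespace backed by the local /dev/nvme0n1 picked by setup.sh.
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  mkdir ports/1
  echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  # TCP listener on the in-namespace target address used throughout the run.
  echo 10.0.0.1 > ports/1/addr_traddr
  echo tcp      > ports/1/addr_trtype
  echo 4420     > ports/1/addr_trsvcid
  echo ipv4     > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

Once the port is linked, 'nvme discover -t tcp -a 10.0.0.1 -s 4420' returns the two records seen above (the discovery subsystem and nqn.2016-06.io.spdk:testnqn), and the attr_model value matches the "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" reported by spdk_nvme_identify against that subsystem.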
00:27:04.631 START TEST nvmf_auth_host 00:27:04.631 ************************************ 00:27:04.631 21:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:04.893 * Looking for test storage... 00:27:04.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:04.893 21:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.482 
21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:11.482 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:11.482 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:11.482 Found net devices under 0000:4b:00.0: 
cvl_0_0 00:27:11.482 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:11.483 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.483 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:11.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:11.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:27:11.744 00:27:11.744 --- 10.0.0.2 ping statistics --- 00:27:11.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.744 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:11.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:11.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:27:11.744 00:27:11.744 --- 10.0.0.1 ping statistics --- 00:27:11.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.744 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:11.744 21:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.005 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1737239 00:27:12.005 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1737239 00:27:12.005 21:03:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:12.005 21:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1737239 ']' 00:27:12.005 21:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.005 21:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:12.005 21:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
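The stretch above is prepare_net_devs plus nvmf_tcp_init: the two ice ports found under 0000:4b:00.0/.1 show up as cvl_0_0 and cvl_0_1, one of them is moved into a private network namespace, addresses are assigned on both sides, reachability is ping-checked, and then nvmf_tgt is started inside the namespace with nvme_auth logging. A condensed replay of those commands, with interface names, addresses and flags taken directly from the trace (this is an illustration of what the harness did, not the harness itself):

# Network split performed by nvmf_tcp_init, as traced above.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Move the target-side port into its own namespace so traffic between the two
# e810 ports actually crosses the wire.
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"

ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# Open NVMe/TCP (port 4420) from the initiator interface and verify both
# directions, as the trace does before starting the app.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$NVMF_FIRST_TARGET_IP"
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"

# nvmfappstart then launches the SPDK app inside the namespace with nvme_auth
# debug logging enabled (same binary path and flags as in the trace).
ip netns exec "$NVMF_TARGET_NAMESPACE" \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!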
00:27:12.005 21:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:12.005 21:03:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.575 21:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:12.575 21:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:12.575 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:12.575 21:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:12.575 21:03:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=77fbe1b2d333a13e4d3cd732e37235f9 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5Xz 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 77fbe1b2d333a13e4d3cd732e37235f9 0 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 77fbe1b2d333a13e4d3cd732e37235f9 0 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=77fbe1b2d333a13e4d3cd732e37235f9 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5Xz 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5Xz 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5Xz 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:12.836 
21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ead15979dd99948411818148a8fc39d5fcf57892439df20a3814d4fe421ac233 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.cnF 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ead15979dd99948411818148a8fc39d5fcf57892439df20a3814d4fe421ac233 3 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ead15979dd99948411818148a8fc39d5fcf57892439df20a3814d4fe421ac233 3 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ead15979dd99948411818148a8fc39d5fcf57892439df20a3814d4fe421ac233 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.cnF 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.cnF 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.cnF 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=42d44fb06747a92b5498cb4c0f00597dbe3f2d7b65d74ab4 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fVz 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 42d44fb06747a92b5498cb4c0f00597dbe3f2d7b65d74ab4 0 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 42d44fb06747a92b5498cb4c0f00597dbe3f2d7b65d74ab4 0 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=42d44fb06747a92b5498cb4c0f00597dbe3f2d7b65d74ab4 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fVz 00:27:12.836 21:03:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fVz 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.fVz 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=560e03363c72db480cc32f5627ef3425b88cbadae5f5c34b 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.CS8 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 560e03363c72db480cc32f5627ef3425b88cbadae5f5c34b 2 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 560e03363c72db480cc32f5627ef3425b88cbadae5f5c34b 2 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=560e03363c72db480cc32f5627ef3425b88cbadae5f5c34b 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:12.836 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:13.097 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.CS8 00:27:13.097 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.CS8 00:27:13.097 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.CS8 00:27:13.097 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:13.097 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:13.097 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.097 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:13.097 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7824d2b6d2272f28d2b3a8ea020dfbc0 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Qso 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7824d2b6d2272f28d2b3a8ea020dfbc0 1 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7824d2b6d2272f28d2b3a8ea020dfbc0 1 
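The gen_dhchap_key calls traced through this stretch all follow the same pattern: pull len/2 random bytes as hex via xxd, wrap them into a DHHC-1 secret with an inline "python -" helper, write the result to a mktemp file and chmod it to 0600. The helper's body is not shown by xtrace; the sketch below fills it in with an assumed encoding (base64 of the hex string followed by a little-endian CRC32, digest ids 0-3 for null/sha256/sha384/sha512) inferred from the key strings visible later in the log, so treat that part as a guess rather than the script's actual code:

# Hypothetical re-creation of gen_dhchap_key as seen in the trace.
# ASSUMPTION: the inline python step emits base64(hexkey || CRC32(hexkey)).
gen_dhchap_key() {
    local digest=$1 len=$2            # e.g. "null" 48  or  "sha512" 64
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c '
import base64, sys, zlib
digests = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}
digest, key = digests[sys.argv[1]], sys.argv[2].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed detail, see note above
print(f"DHHC-1:{digest:02d}:{base64.b64encode(key + crc).decode()}:")
' "$digest" "$key" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

keyfile=$(gen_dhchap_key null 48)     # same shape as the keys[1] secret in the trace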
00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7824d2b6d2272f28d2b3a8ea020dfbc0 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Qso 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Qso 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Qso 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8fc80f68a923ddf755edd19fcc0593f3 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.YKm 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8fc80f68a923ddf755edd19fcc0593f3 1 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8fc80f68a923ddf755edd19fcc0593f3 1 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8fc80f68a923ddf755edd19fcc0593f3 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.YKm 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.YKm 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.YKm 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=686394e48ac21b20bbd8a541d1b96c21462b78372dcfe44a 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1hP 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 686394e48ac21b20bbd8a541d1b96c21462b78372dcfe44a 2 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 686394e48ac21b20bbd8a541d1b96c21462b78372dcfe44a 2 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=686394e48ac21b20bbd8a541d1b96c21462b78372dcfe44a 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1hP 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1hP 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.1hP 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2109b95c149c8cea5484a422f116896b 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OnN 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2109b95c149c8cea5484a422f116896b 0 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2109b95c149c8cea5484a422f116896b 0 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2109b95c149c8cea5484a422f116896b 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:13.098 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:13.359 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OnN 00:27:13.359 21:03:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OnN 00:27:13.359 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.OnN 00:27:13.359 21:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c4a13797b87781b37c929d8ceae0255b8f88c337dd80b29167d84a99f7eddfa5 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lKK 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c4a13797b87781b37c929d8ceae0255b8f88c337dd80b29167d84a99f7eddfa5 3 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c4a13797b87781b37c929d8ceae0255b8f88c337dd80b29167d84a99f7eddfa5 3 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c4a13797b87781b37c929d8ceae0255b8f88c337dd80b29167d84a99f7eddfa5 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lKK 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lKK 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.lKK 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1737239 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1737239 ']' 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
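At this point the run has produced five key/controller-key pairs. Collected from the trace above (the file names are whatever mktemp happened to return on this run), the arrays the script carries forward are listed below, together with the registration loop that the trace executes next; rpc_cmd here is the autotest helper that forwards to scripts/rpc.py against the target's RPC socket (/var/tmp/spdk.sock in this run):

# Key material generated above, as recorded in the trace.
keys[0]=/tmp/spdk.key-null.5Xz     ckeys[0]=/tmp/spdk.key-sha512.cnF
keys[1]=/tmp/spdk.key-null.fVz     ckeys[1]=/tmp/spdk.key-sha384.CS8
keys[2]=/tmp/spdk.key-sha256.Qso   ckeys[2]=/tmp/spdk.key-sha256.YKm
keys[3]=/tmp/spdk.key-sha384.1hP   ckeys[3]=/tmp/spdk.key-null.OnN
keys[4]=/tmp/spdk.key-sha512.lKK   ckeys[4]=

# The host/auth.sh@80-82 loop traced next registers each file with the running
# nvmf_tgt keyring; controller keys are optional (ckeys[4] is empty).
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
done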
00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5Xz 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.cnF ]] 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cnF 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.359 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.fVz 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.CS8 ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CS8 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Qso 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.YKm ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YKm 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.1hP 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.OnN ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.OnN 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.lKK 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
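What follows in the trace is nvmet_auth_init/configure_kernel_target: load the nvmet modules, pick a free local NVMe namespace (/dev/nvme0n1, which reports no GPT), and build a kernel NVMe/TCP target through configfs on 10.0.0.1:4420, then restrict it to the test host NQN. A condensed sketch; xtrace shows the echo values but not their redirect targets, so the attribute paths below are the standard nvmet configfs names rather than something visible in the log:

# Kernel target built by the configure_kernel_target steps traced below.
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
kernel_namespace=$kernel_subsystem/namespaces/1
kernel_port=/sys/kernel/config/nvmet/ports/1
nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir "$kernel_subsystem" "$kernel_namespace" "$kernel_port"

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$kernel_subsystem/attr_model"
echo 1            > "$kernel_subsystem/attr_allow_any_host"
echo /dev/nvme0n1 > "$kernel_namespace/device_path"
echo 1            > "$kernel_namespace/enable"

echo 10.0.0.1 > "$kernel_port/addr_traddr"
echo tcp      > "$kernel_port/addr_trtype"
echo 4420     > "$kernel_port/addr_trsvcid"
echo ipv4     > "$kernel_port/addr_adrfam"
ln -s "$kernel_subsystem" "$kernel_port/subsystems/"
# nvme discover against 10.0.0.1:4420 should now list the discovery subsystem
# plus nqn.2024-02.io.spdk:cnode0, which is exactly what the trace prints.

# nvmet_auth_init then restricts the subsystem to the test host NQN.
mkdir "$nvmet_host"
echo 0 > "$kernel_subsystem/attr_allow_any_host"
ln -s "$nvmet_host" "$kernel_subsystem/allowed_hosts/"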
00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:13.621 21:03:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:16.921 Waiting for block devices as requested 00:27:16.921 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:16.921 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:16.921 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:16.921 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:16.921 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:16.921 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:16.921 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:16.921 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:16.921 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:17.181 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:17.181 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:17.441 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:17.441 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:17.441 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:17.441 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:17.702 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:17.702 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:18.644 No valid GPT data, bailing 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:18.644 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:18.644 00:27:18.644 Discovery Log Number of Records 2, Generation counter 2 00:27:18.644 =====Discovery Log Entry 0====== 00:27:18.644 trtype: tcp 00:27:18.644 adrfam: ipv4 00:27:18.644 subtype: current discovery subsystem 00:27:18.644 treq: not specified, sq flow control disable supported 00:27:18.644 portid: 1 00:27:18.644 trsvcid: 4420 00:27:18.644 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:18.645 traddr: 10.0.0.1 00:27:18.645 eflags: none 00:27:18.645 sectype: none 00:27:18.645 =====Discovery Log Entry 1====== 00:27:18.645 trtype: tcp 00:27:18.645 adrfam: ipv4 00:27:18.645 subtype: nvme subsystem 00:27:18.645 treq: not specified, sq flow control disable supported 00:27:18.645 portid: 1 00:27:18.645 trsvcid: 4420 00:27:18.645 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:18.645 traddr: 10.0.0.1 00:27:18.645 eflags: none 00:27:18.645 sectype: none 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 
]] 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.645 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.906 nvme0n1 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.906 21:03:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: ]] 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.906 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.907 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.907 
21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.907 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.907 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.907 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.907 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.907 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.907 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.907 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.907 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.907 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.907 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.907 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.907 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.907 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.167 nvme0n1 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.167 21:03:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.167 21:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.427 nvme0n1 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
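Each iteration of the digest/dhgroup/keyid loop traced here has two halves: nvmet_auth_set_key programs what the kernel target expects from this host NQN, then connect_authenticate points SPDK's bdev_nvme initiator at the matching keyring entries and attaches. Below is a condensed sketch of one round; the RPC calls are copied from the trace, while the kernel-side configfs attribute names are not shown by xtrace and are filled in from the standard nvmet layout, and the DHHC-1 strings are truncated here for readability:

# One round of the auth loop, e.g. digest=sha256 dhgroup=ffdhe2048 keyid=1.
nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

# Target side: demand DH-HMAC-CHAP with these parameters from the host
# (attribute names assumed from the nvmet configfs layout, values from the trace).
echo 'hmac(sha256)'        > "$nvmet_host/dhchap_hash"
echo ffdhe2048             > "$nvmet_host/dhchap_dhgroup"
echo "DHHC-1:00:NDJk...==:" > "$nvmet_host/dhchap_key"       # keys[1], truncated
echo "DHHC-1:02:NTYw...==:" > "$nvmet_host/dhchap_ctrl_key"  # ckeys[1], truncated

# Host side: limit the SPDK initiator to the same digest/dhgroup, then attach
# using the keyring entries registered earlier (key1/ckey1).
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Success is checked by listing controllers, then the controller is detached
# before the next digest/dhgroup/keyid combination is tried.
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
rpc_cmd bdev_nvme_detach_controller nvme0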
00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: ]] 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.427 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.687 nvme0n1 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: ]] 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:19.687 21:03:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.687 nvme0n1 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.687 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.947 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.948 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.948 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.948 nvme0n1 00:27:19.948 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.948 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.948 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.948 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.948 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.948 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.948 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.948 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.948 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.948 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: ]] 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.208 21:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.208 nvme0n1 00:27:20.208 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.208 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.208 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.208 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.208 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.208 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.468 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.469 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.469 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.469 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.469 nvme0n1 00:27:20.469 
21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.469 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.469 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.469 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.469 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.469 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: ]] 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:20.728 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.729 nvme0n1 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.729 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: ]] 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.148 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.149 nvme0n1 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.149 
21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.149 21:03:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.149 21:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.408 nvme0n1 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: ]] 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:21.409 21:03:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.409 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.668 nvme0n1 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.668 21:03:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.668 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.238 nvme0n1 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: ]] 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.238 21:03:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.238 21:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.499 nvme0n1 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: ]] 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.499 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.760 nvme0n1 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.760 21:03:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.760 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.022 nvme0n1 00:27:23.022 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.022 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.022 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.022 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.022 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.022 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:23.283 21:03:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: ]] 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.283 21:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.544 nvme0n1 00:27:23.544 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.544 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.544 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.544 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.544 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.805 
21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.805 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.806 21:03:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.806 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.378 nvme0n1 00:27:24.378 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.378 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.378 21:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.378 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.378 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.378 21:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: ]] 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.378 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.379 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.639 nvme0n1 00:27:24.639 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.639 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.639 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.639 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.970 
21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: ]] 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.970 21:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.255 nvme0n1 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.255 21:03:29 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:25.516 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.516 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.516 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.516 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.516 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.516 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.516 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.516 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.516 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.516 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.516 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.516 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.516 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.516 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.516 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.777 nvme0n1 00:27:25.777 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.777 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.777 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.777 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.777 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.777 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.777 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.777 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.777 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.777 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: ]] 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.038 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.039 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.039 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.039 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.039 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.039 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.039 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.039 21:03:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.039 21:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.039 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.039 21:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.623 nvme0n1 00:27:26.623 21:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.623 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.623 21:03:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.623 21:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.623 21:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.623 21:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.623 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.623 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.623 21:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.623 21:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.883 21:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.456 nvme0n1 00:27:27.456 21:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.456 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.456 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.456 21:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.456 21:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.456 21:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.456 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.456 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.456 21:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.456 21:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: ]] 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.717 21:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.289 nvme0n1 00:27:28.289 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.289 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.289 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.289 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.289 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.289 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.289 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.289 
21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.289 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.289 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: ]] 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.550 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.122 nvme0n1 00:27:29.122 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.122 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.122 21:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.122 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.122 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.122 21:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.383 
21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.383 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.955 nvme0n1 00:27:29.955 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.955 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.955 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.955 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.955 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.955 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: ]] 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.215 21:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.215 nvme0n1 00:27:30.215 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.215 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.215 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.215 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.215 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.215 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.216 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.216 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.216 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.216 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.216 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.216 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.216 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:30.216 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.216 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.216 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.216 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.216 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
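Each pass of the trace above runs the same host-side sequence against the target at 10.0.0.1:4420: restrict the initiator to one DH-HMAC-CHAP digest/DH-group combination, attach a controller with the host key (and, when one is defined for that keyid, the controller key), confirm the controller came up, then detach it. A minimal sketch of that sequence follows, using the same RPCs the rpc_cmd wrapper issues in the log; the scripts/rpc.py path is an assumption, and key0/ckey0 refer to key objects created earlier in the test (not shown here).

    # Host-side DH-HMAC-CHAP connect/verify/teardown, mirrored from the trace.
    rpc=scripts/rpc.py   # assumed invocation path for the SPDK RPC client

    # 1. Allow only one digest / DH group on the initiator side.
    $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # 2. Attach with the host key; --dhchap-ctrlr-key enables bidirectional auth
    #    and is omitted for keyids that have no controller key (e.g. keyid 4).
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3. Authentication succeeded if the controller is listed, then tear it down.
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    $rpc bdev_nvme_detach_controller nvme0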
00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.476 nvme0n1 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.476 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: ]] 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.477 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.737 nvme0n1 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: ]] 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.737 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.738 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.738 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.738 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.738 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.738 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.738 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.738 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.738 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.738 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.738 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.998 nvme0n1 00:27:30.998 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.998 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.998 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.998 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.999 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.260 nvme0n1 00:27:31.260 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.260 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.260 21:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.260 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.260 21:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: ]] 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.260 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
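On the target side, nvmet_auth_set_key <digest> <dhgroup> <keyid> is what produces the echo 'hmac(sha384)' / echo ffdhe3072 / echo DHHC-1:... lines at auth.sh@48-51 above. The trace only shows the values being emitted; the sketch below assumes they are written into the kernel nvmet configfs host entry with in-band authentication support, so the attribute names are an assumption, not something the log confirms.

    # Plausible body of nvmet_auth_set_key, under the kernel-nvmet-configfs assumption.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    key='DHHC-1:00:...'    # host key for this keyid (full string appears in the trace)
    ckey='DHHC-1:03:...'   # controller key, when one is defined for this keyid

    echo 'hmac(sha384)' > "$host/dhchap_hash"      # digest      (auth.sh@48)
    echo ffdhe3072      > "$host/dhchap_dhgroup"   # DH group    (auth.sh@49)
    echo "$key"         > "$host/dhchap_key"       # host key    (auth.sh@50)
    # Written only when a controller key exists for this keyid (auth.sh@51):
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"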
00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.261 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.522 nvme0n1 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
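The auth.sh@100-104 markers repeated through the trace outline the loop that drives all of these iterations: every digest is tested against every DH group and every keyid, pairing the target-side key setup with a host-side connect. A sketch of that loop shape is below; the array contents are inferred from what the trace exercises (sha256/sha384 digests, ffdhe2048 through ffdhe8192 groups, keyids 0-4), while the authoritative definitions live earlier in host/auth.sh.

    # Loop structure reconstructed from host/auth.sh@100-104 in the trace.
    for digest in "${digests[@]}"; do            # e.g. sha256, sha384, ...
        for dhgroup in "${dhgroups[@]}"; do      # e.g. ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do       # 0-4; keyid 4 has no controller key
                nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"  # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (rpc_cmd)
            done
        done
    done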
00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.522 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.783 nvme0n1 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: ]] 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.783 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.044 nvme0n1 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: ]] 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.044 21:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.305 nvme0n1 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.305 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.567 nvme0n1 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.567 21:03:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: ]] 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.567 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.828 nvme0n1 00:27:32.828 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.090 21:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.352 nvme0n1 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.352 21:03:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: ]] 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.352 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.613 nvme0n1 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.613 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.614 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:33.614 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:33.614 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:33.614 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.614 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.614 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:33.614 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: ]] 00:27:33.614 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:33.614 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:33.614 21:03:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.614 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.614 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.614 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.874 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.135 nvme0n1 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.135 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:34.136 21:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.397 nvme0n1 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: ]] 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.397 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.970 nvme0n1 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.970 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.971 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.971 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.971 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.971 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.971 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.971 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.971 21:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.971 21:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:34.971 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.971 21:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.543 nvme0n1 00:27:35.543 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.543 21:03:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.543 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.543 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.543 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.543 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.543 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.543 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.543 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.543 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.543 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.543 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.543 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: ]] 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.544 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.117 nvme0n1 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: ]] 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.117 21:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.689 nvme0n1 00:27:36.689 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.689 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
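The ip_candidates bookkeeping traced just above (nvmf/common.sh@741-755) is the get_main_ns_ip helper deciding which address the host-side connect should target: for tcp it dereferences NVMF_INITIATOR_IP, which resolves to 10.0.0.1 in this run. A minimal reconstruction from the trace follows, assuming the transport under test is carried in TEST_TRANSPORT (that variable name is not visible in the xtrace output).

get_main_ns_ip() {
	local ip
	local -A ip_candidates=()
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP
	# Map the transport under test to the name of the variable holding the address.
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}
	# Dereference that variable; in this tcp run ${!ip} expands to 10.0.0.1.
	[[ -z ${!ip} ]] && return 1
	echo "${!ip}"
}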
00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.690 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.262 nvme0n1 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: ]] 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
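Each nvmet_auth_set_key step in this trace (host/auth.sh@42-51) installs the DHHC-1 secret for the current keyid on the kernel nvmet target before the host attempts to connect with the matching key. A sketch of what those echo lines are writing, assuming the standard nvmet configfs host attributes; the configfs path and hostnqn directory are assumptions, not something the xtrace output shows.

nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
	# Assumed location of the allowed-host entry created earlier in the test.
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

	echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. 'hmac(sha384)'
	echo "$dhgroup" > "$host/dhchap_dhgroup"       # e.g. ffdhe8192
	echo "$key" > "$host/dhchap_key"               # host secret for this keyid
	# A controller (bidirectional) secret is only written when one is defined.
	[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}

Keyid 4 is the one case in this section with no controller key, which is why its trace shows [[ -z '' ]] and skips that last write.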
00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.262 21:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.833 nvme0n1 00:27:37.833 21:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.833 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.833 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.833 21:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.833 21:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.094 21:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.665 nvme0n1 00:27:38.665 21:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.665 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.665 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.665 21:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.665 21:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: ]] 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.926 21:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.497 nvme0n1 00:27:39.497 21:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.497 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.497 21:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.497 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.497 21:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: ]] 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.757 21:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.328 nvme0n1 00:27:40.328 21:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.588 21:03:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.588 21:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.528 nvme0n1 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: ]] 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.528 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.529 nvme0n1 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.529 21:03:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.529 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.802 nvme0n1 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: ]] 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.802 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 nvme0n1 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.132 21:03:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: ]] 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.132 21:03:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 nvme0n1 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.132 21:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.393 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.394 nvme0n1 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.394 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: ]] 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.654 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.655 nvme0n1 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.655 
21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.655 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.915 21:03:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.915 nvme0n1 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.915 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
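[editor's note] Each pass through the trace above starts with the same target-side step: nvmet_auth_set_key (host/auth.sh@42-51) records the digest as 'hmac(<digest>)', the FFDHE group, the DHHC-1 secret and, when one is defined, the bidirectional controller key for the allowed host. The sketch below is a reconstruction of that routine, assuming the Linux nvmet target exposes the usual dhchap_* attributes under configfs; the configfs path and the NVMET_HOST variable are illustrative assumptions, and only the echoed values (hmac(sha512), ffdhe3072, the DHHC-1 strings) come from the log.

nvmet_auth_set_key() {
    # Sketch reconstructed from the trace; not the verbatim test helper.
    # Assumption: the allowed-host entry already exists under nvmet configfs.
    local digest=$1 dhgroup=$2 keyid=$3
    local host=${NVMET_HOST:-/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0}

    echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. hmac(sha512)
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"   # e.g. ffdhe3072
    echo "${keys[keyid]}"  > "${host}/dhchap_key"       # DHHC-1:xx:... secret
    # The controller (bidirectional) key is optional; keyid 4 has none in this run.
    [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "${host}/dhchap_ctrl_key"
}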
00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: ]] 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.175 21:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.175 nvme0n1 00:27:43.175 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.175 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.175 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.175 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.175 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.175 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.436 21:03:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: ]] 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
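[editor's note] Before every attach the test resolves the address to dial via get_main_ns_ip (nvmf/common.sh@741-755), whose branches are visible in the trace: an associative array maps the transport to the name of the environment variable holding the address, and indirect expansion turns that name into 10.0.0.1 for tcp. The sketch below is reconstructed from the trace; the empty-value branches are not exercised here and are shown as plain returns.

get_main_ns_ip() {
    # Reconstructed from nvmf/common.sh@741-755 as traced; assumes TEST_TRANSPORT,
    # NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP are exported by the test environment.
    local ip
    local -A ip_candidates=()

    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs dial the initiator IP

    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    ip=${ip_candidates[$TEST_TRANSPORT]}   # holds the *name* of the variable to use
    [[ -z ${!ip} ]] && return 1            # indirect expansion, here 10.0.0.1

    echo "${!ip}"
}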
00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.436 nvme0n1 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.436 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.696 
21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.696 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.955 nvme0n1 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: ]] 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.955 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.215 nvme0n1 00:27:44.215 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.215 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.215 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.215 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.215 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.215 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.215 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.215 21:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.215 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.215 21:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.215 21:03:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.215 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.216 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.216 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.216 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.216 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.216 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.216 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.216 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.216 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.216 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.216 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.216 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.501 nvme0n1 00:27:44.501 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.501 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.501 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.501 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.501 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.501 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.501 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.501 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
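Each iteration traced above and below follows the same pattern: host/auth.sh installs the secret for one key id on the kernel nvmet target with nvmet_auth_set_key, connect_authenticate then restricts the SPDK host to a single digest/dhgroup pair, attaches a controller with the matching key material, checks that the controller came up, and detaches it. A condensed sketch of one pass (sha512 with ffdhe4096, keyid=1), assuming the rpc_cmd wrapper and the key names key1/ckey1 that the test suite registers beforehand; every command below appears verbatim in the trace:

  # target side: install the DH-HMAC-CHAP host secret (and controller secret) for this key id
  nvmet_auth_set_key sha512 ffdhe4096 1

  # host side: restrict negotiation to hmac(sha512) and the ffdhe4096 DH group
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # connect with bidirectional authentication, verify the controller name, then tear down
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0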
00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: ]] 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.502 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.762 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.762 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.762 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.762 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.762 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.762 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.762 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.762 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.762 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.762 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.762 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.762 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.762 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:44.762 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.762 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.022 nvme0n1 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.022 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: ]] 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.023 21:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.282 nvme0n1 00:27:45.282 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.282 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.282 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.282 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.282 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.282 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.282 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.283 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.542 nvme0n1 00:27:45.542 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.542 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.542 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.542 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.542 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.542 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.542 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.542 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.542 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:45.542 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: ]] 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
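The get_main_ns_ip calls traced here decide which address the host dials: nvmf/common.sh keeps a map from transport type to the name of the environment variable holding the address (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and prints the dereferenced value, 10.0.0.1 in this run. A rough reconstruction of that helper from the trace follows; the actual function in nvmf/common.sh may differ in detail:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      # bail out if the transport is unset or has no mapped variable
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      # dereference the variable name and require it to be populated
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }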
00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.802 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.063 nvme0n1 00:27:46.063 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.063 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.063 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.063 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.063 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.063 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
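Two details worth noting in these iterations: the two-digit field after DHHC-1: in each secret records how the secret was transformed (00 for an unhashed secret, 01/02/03 for SHA-256/384/512 in the NVMe-oF secret representation), and the ckey array assignment at host/auth.sh@58 makes the controller secret optional. When ckeys[keyid] is empty, as it is for key id 4 in this run, the :+ expansion yields nothing, the attach call omits --dhchap-ctrlr-key, and that pass exercises unidirectional authentication only. A small illustration of the expansion, reusing the rpc_cmd wrapper and key names from the trace and with hypothetical placeholder secrets standing in for the DHHC-1 strings the test actually uses:

  # hypothetical controller-secret table; index 4 is deliberately left empty
  declare -a ckeys=([0]=secret0 [1]=secret1 [2]=secret2 [3]=secret3 [4]=)
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  # "${ckey[@]}" expands to nothing for keyid=4, and to
  # '--dhchap-ctrlr-key ckeyN' for key ids 0 through 3
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"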
00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.323 21:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.323 21:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.323 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.323 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.323 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.323 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.323 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.323 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.323 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.323 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.323 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.323 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.323 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.323 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.323 21:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.323 21:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.895 nvme0n1 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: ]] 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.895 21:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.155 nvme0n1 00:27:47.155 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.155 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.155 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.155 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.155 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.155 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: ]] 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.415 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.984 nvme0n1 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.984 21:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.245 nvme0n1 00:27:48.245 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.245 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.245 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.245 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.245 21:03:52 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.245 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdmYmUxYjJkMzMzYTEzZTRkM2NkNzMyZTM3MjM1ZjkSwc8l: 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: ]] 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWFkMTU5NzlkZDk5OTQ4NDExODE4MTQ4YThmYzM5ZDVmY2Y1Nzg5MjQzOWRmMjBhMzgxNGQ0ZmU0MjFhYzIzM47SqMo=: 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.505 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.075 nvme0n1 00:27:49.075 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.075 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.075 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.075 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.075 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.075 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.335 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.335 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.335 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.335 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.335 21:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.335 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.335 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:49.335 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.335 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.335 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.335 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.335 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:49.335 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:49.335 21:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.335 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.336 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.336 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.336 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.336 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.336 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.336 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.336 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.336 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.336 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.336 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.906 nvme0n1 00:27:49.906 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.906 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.906 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.906 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.906 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.906 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.167 21:03:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzgyNGQyYjZkMjI3MmYyOGQyYjNhOGVhMDIwZGZiYzA4QYEv: 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: ]] 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGZjODBmNjhhOTIzZGRmNzU1ZWRkMTlmY2MwNTkzZjMpaMX+: 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.167 21:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.737 nvme0n1 00:27:50.737 21:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.737 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.737 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.737 21:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.737 21:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.737 21:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.013 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njg2Mzk0ZTQ4YWMyMWIyMGJiZDhhNTQxZDFiOTZjMjE0NjJiNzgzNzJkY2ZlNDRhgiA+QQ==: 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: ]] 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjEwOWI5NWMxNDljOGNlYTU0ODRhNDIyZjExNjg5NmLZVAL5: 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:51.014 21:03:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.014 21:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.592 nvme0n1 00:27:51.592 21:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.592 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.592 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.592 21:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.592 21:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.592 21:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzRhMTM3OTdiODc3ODFiMzdjOTI5ZDhjZWFlMDI1NWI4Zjg4YzMzN2RkODBiMjkxNjdkODRhOTlmN2VkZGZhNbuS88A=: 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:51.853 21:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.424 nvme0n1 00:27:52.424 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.424 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.424 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.424 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.424 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.424 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.685 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.685 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.685 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.685 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.685 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.685 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJkNDRmYjA2NzQ3YTkyYjU0OThjYjRjMGYwMDU5N2RiZTNmMmQ3YjY1ZDc0YWI04AhzZg==: 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: ]] 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwZTAzMzYzYzcyZGI0ODBjYzMyZjU2MjdlZjM0MjViODhjYmFkYWU1ZjVjMzRiIRL0Eg==: 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.686 
21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.686 request: 00:27:52.686 { 00:27:52.686 "name": "nvme0", 00:27:52.686 "trtype": "tcp", 00:27:52.686 "traddr": "10.0.0.1", 00:27:52.686 "adrfam": "ipv4", 00:27:52.686 "trsvcid": "4420", 00:27:52.686 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:52.686 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:52.686 "prchk_reftag": false, 00:27:52.686 "prchk_guard": false, 00:27:52.686 "hdgst": false, 00:27:52.686 "ddgst": false, 00:27:52.686 "method": "bdev_nvme_attach_controller", 00:27:52.686 "req_id": 1 00:27:52.686 } 00:27:52.686 Got JSON-RPC error response 00:27:52.686 response: 00:27:52.686 { 00:27:52.686 "code": -5, 00:27:52.686 "message": "Input/output error" 00:27:52.686 } 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.686 request: 00:27:52.686 { 00:27:52.686 "name": "nvme0", 00:27:52.686 "trtype": "tcp", 00:27:52.686 "traddr": "10.0.0.1", 00:27:52.686 "adrfam": "ipv4", 00:27:52.686 "trsvcid": "4420", 00:27:52.686 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:52.686 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:52.686 "prchk_reftag": false, 00:27:52.686 "prchk_guard": false, 00:27:52.686 "hdgst": false, 00:27:52.686 "ddgst": false, 00:27:52.686 "dhchap_key": "key2", 00:27:52.686 "method": "bdev_nvme_attach_controller", 00:27:52.686 "req_id": 1 00:27:52.686 } 00:27:52.686 Got JSON-RPC error response 00:27:52.686 response: 00:27:52.686 { 00:27:52.686 "code": -5, 00:27:52.686 "message": "Input/output error" 00:27:52.686 } 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:52.686 21:03:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.686 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.947 request: 00:27:52.947 { 00:27:52.947 "name": "nvme0", 00:27:52.947 "trtype": "tcp", 00:27:52.947 "traddr": "10.0.0.1", 00:27:52.947 "adrfam": "ipv4", 
00:27:52.947 "trsvcid": "4420", 00:27:52.947 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:52.947 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:52.947 "prchk_reftag": false, 00:27:52.947 "prchk_guard": false, 00:27:52.947 "hdgst": false, 00:27:52.947 "ddgst": false, 00:27:52.947 "dhchap_key": "key1", 00:27:52.947 "dhchap_ctrlr_key": "ckey2", 00:27:52.947 "method": "bdev_nvme_attach_controller", 00:27:52.947 "req_id": 1 00:27:52.947 } 00:27:52.947 Got JSON-RPC error response 00:27:52.947 response: 00:27:52.947 { 00:27:52.947 "code": -5, 00:27:52.947 "message": "Input/output error" 00:27:52.947 } 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:52.947 rmmod nvme_tcp 00:27:52.947 rmmod nvme_fabrics 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1737239 ']' 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1737239 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1737239 ']' 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1737239 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1737239 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1737239' 00:27:52.947 killing process with pid 1737239 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1737239 00:27:52.947 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1737239 00:27:53.208 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:27:53.208 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:53.208 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:53.208 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:53.208 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:53.208 21:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.208 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.208 21:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.142 21:03:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:55.142 21:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:55.142 21:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:55.142 21:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:55.142 21:03:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:55.142 21:03:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:55.142 21:03:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:55.142 21:03:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:55.142 21:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:55.142 21:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:55.142 21:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:55.142 21:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:55.403 21:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:58.781 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:58.781 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:59.041 21:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5Xz /tmp/spdk.key-null.fVz /tmp/spdk.key-sha256.Qso /tmp/spdk.key-sha384.1hP /tmp/spdk.key-sha512.lKK 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:59.041 21:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:02.358 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:02.358 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:02.358 00:28:02.358 real 0m57.658s 00:28:02.358 user 0m51.537s 00:28:02.358 sys 0m14.377s 00:28:02.358 21:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:02.358 21:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.358 ************************************ 00:28:02.358 END TEST nvmf_auth_host 00:28:02.358 ************************************ 00:28:02.358 21:04:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:02.358 21:04:06 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:02.358 21:04:06 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:02.358 21:04:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:02.358 21:04:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.358 21:04:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:02.358 ************************************ 00:28:02.358 START TEST nvmf_digest 00:28:02.358 ************************************ 00:28:02.358 21:04:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:02.619 * Looking for test storage... 
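The nvmf_auth_host passes traced above exercise SPDK's DH-HMAC-CHAP support over TCP: for each key id the script installs the target-side secret through the kernel nvmet configfs, selects the digest/dhgroup under test with bdev_nvme_set_options, attaches a controller with the matching host and controller keys, verifies it appears in bdev_nvme_get_controllers, and detaches it; the closing passes confirm that attaching with no key or a mismatched key pair is rejected with -5 (Input/output error). A condensed sketch of one host-side pass, using only the RPCs and arguments that appear verbatim in the trace — rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, and key2/ckey2 name secrets the script provisioned earlier in the run, outside this excerpt:

  # select the digest and DH group for this pass
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # attach with the host key and the bidirectional controller key
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # expect exactly one controller named nvme0, then tear it down
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
  scripts/rpc.py bdev_nvme_detach_controller nvme0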
00:28:02.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:02.619 21:04:06 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:02.619 21:04:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:09.233 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:09.233 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:09.233 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:09.233 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.233 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.495 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.495 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.495 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:09.495 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.495 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.495 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:09.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:28:09.756 00:28:09.756 --- 10.0.0.2 ping statistics --- 00:28:09.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.756 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:09.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:28:09.756 00:28:09.756 --- 10.0.0.1 ping statistics --- 00:28:09.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.756 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:09.756 ************************************ 00:28:09.756 START TEST nvmf_digest_clean 00:28:09.756 ************************************ 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1753599 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1753599 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1753599 ']' 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.756 
21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:09.756 21:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.756 [2024-07-15 21:04:13.545631] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:28:09.756 [2024-07-15 21:04:13.545681] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.756 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.756 [2024-07-15 21:04:13.614310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.017 [2024-07-15 21:04:13.685857] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.017 [2024-07-15 21:04:13.685893] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.017 [2024-07-15 21:04:13.685903] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.017 [2024-07-15 21:04:13.685910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.017 [2024-07-15 21:04:13.685917] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
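Before this point the trace shows nvmftestinit preparing the physical test bed for nvmf_digest: the two E810 ports are detected as cvl_0_0 and cvl_0_1, the target-side port is moved into its own network namespace (cvl_0_0_ns_spdk), 10.0.0.1 is assigned to the initiator side and 10.0.0.2 to the target side, TCP port 4420 is opened in iptables, and both directions are verified with ping. digest.sh then starts nvmf_tgt inside that namespace with --wait-for-rpc, which leaves room to configure accel offload (DSA) before framework init; in this run both dsa_initiator and dsa_target resolve to false, so the software path is used. A condensed sketch of the namespace plumbing, taken from the ip/iptables/nvmf_tgt invocations in the trace above (binary path shortened relative to the workspace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc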
00:28:10.017 [2024-07-15 21:04:13.685938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.588 null0 00:28:10.588 [2024-07-15 21:04:14.436611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.588 [2024-07-15 21:04:14.460846] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1753945 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1753945 /var/tmp/bperf.sock 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1753945 ']' 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:10.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:10.588 21:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.849 [2024-07-15 21:04:14.516887] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:28:10.849 [2024-07-15 21:04:14.516936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753945 ] 00:28:10.849 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.849 [2024-07-15 21:04:14.592735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.849 [2024-07-15 21:04:14.656663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.421 21:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:11.421 21:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:11.421 21:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:11.421 21:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:11.421 21:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:11.682 21:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:11.682 21:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.255 nvme0n1 00:28:12.255 21:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:12.255 21:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:12.255 Running I/O for 2 seconds... 
00:28:14.167 00:28:14.167 Latency(us) 00:28:14.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.167 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:14.167 nvme0n1 : 2.00 20832.79 81.38 0.00 0.00 6135.71 2717.01 13544.11 00:28:14.167 =================================================================================================================== 00:28:14.167 Total : 20832.79 81.38 0.00 0.00 6135.71 2717.01 13544.11 00:28:14.167 0 00:28:14.167 21:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:14.167 21:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:14.167 21:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:14.167 21:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:14.167 | select(.opcode=="crc32c") 00:28:14.167 | "\(.module_name) \(.executed)"' 00:28:14.167 21:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:14.428 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:14.428 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:14.428 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:14.428 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:14.428 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1753945 00:28:14.428 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1753945 ']' 00:28:14.428 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1753945 00:28:14.428 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:14.428 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:14.428 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1753945 00:28:14.428 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:14.428 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:14.428 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1753945' 00:28:14.428 killing process with pid 1753945 00:28:14.428 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1753945 00:28:14.428 Received shutdown signal, test time was about 2.000000 seconds 00:28:14.428 00:28:14.428 Latency(us) 00:28:14.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.428 =================================================================================================================== 00:28:14.428 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:14.428 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1753945 00:28:14.689 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:14.689 21:04:18 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:14.689 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:14.689 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:14.689 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:14.689 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:14.689 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:14.689 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1754634 00:28:14.689 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1754634 /var/tmp/bperf.sock 00:28:14.689 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1754634 ']' 00:28:14.689 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:14.689 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:14.689 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:14.689 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:14.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:14.689 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:14.689 21:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:14.689 [2024-07-15 21:04:18.371043] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:28:14.689 [2024-07-15 21:04:18.371098] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754634 ] 00:28:14.689 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:14.689 Zero copy mechanism will not be used. 
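The randread 4096/128 pass above, and every run_bperf pass that follows, drives a dedicated bdevperf instance through the same sequence; only -w, -o and -q change (4096/128 here, 131072/16 next, then the randwrite variants). Condensed from the commands the trace shows, with paths shortened to the spdk checkout; the controller name nvme0 and the subsystem nqn are taken from the log, and backgrounding the bdevperf process is implied rather than shown:
  # start bdevperf paused on its own RPC socket, then configure and run it
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests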
00:28:14.689 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.689 [2024-07-15 21:04:18.444911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.689 [2024-07-15 21:04:18.497874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.262 21:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:15.262 21:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:15.262 21:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:15.262 21:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:15.262 21:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:15.522 21:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:15.522 21:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:15.783 nvme0n1 00:28:15.783 21:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:15.783 21:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:15.783 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:15.783 Zero copy mechanism will not be used. 00:28:15.783 Running I/O for 2 seconds... 
00:28:18.329 00:28:18.329 Latency(us) 00:28:18.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.329 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:18.329 nvme0n1 : 2.00 2149.87 268.73 0.00 0.00 7441.37 3932.16 13871.79 00:28:18.329 =================================================================================================================== 00:28:18.329 Total : 2149.87 268.73 0.00 0.00 7441.37 3932.16 13871.79 00:28:18.329 0 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:18.329 | select(.opcode=="crc32c") 00:28:18.329 | "\(.module_name) \(.executed)"' 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1754634 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1754634 ']' 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1754634 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1754634 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1754634' 00:28:18.329 killing process with pid 1754634 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1754634 00:28:18.329 Received shutdown signal, test time was about 2.000000 seconds 00:28:18.329 00:28:18.329 Latency(us) 00:28:18.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.329 =================================================================================================================== 00:28:18.329 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:18.329 21:04:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1754634 00:28:18.329 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:18.329 21:04:22 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:18.329 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:18.329 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:18.329 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:18.329 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:18.329 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:18.329 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1755325 00:28:18.329 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1755325 /var/tmp/bperf.sock 00:28:18.329 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1755325 ']' 00:28:18.329 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:18.329 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:18.329 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:18.329 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:18.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:18.329 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:18.329 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:18.329 [2024-07-15 21:04:22.080262] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:28:18.329 [2024-07-15 21:04:22.080316] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755325 ] 00:28:18.329 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.329 [2024-07-15 21:04:22.156295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.329 [2024-07-15 21:04:22.208494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.301 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:19.301 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:19.301 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:19.301 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:19.301 21:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:19.301 21:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:19.301 21:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:19.571 nvme0n1 00:28:19.571 21:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:19.571 21:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:19.830 Running I/O for 2 seconds... 
00:28:21.743 00:28:21.743 Latency(us) 00:28:21.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.743 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:21.743 nvme0n1 : 2.01 21267.97 83.08 0.00 0.00 6006.79 5570.56 14964.05 00:28:21.744 =================================================================================================================== 00:28:21.744 Total : 21267.97 83.08 0.00 0.00 6006.79 5570.56 14964.05 00:28:21.744 0 00:28:21.744 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:21.744 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:21.744 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:21.744 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:21.744 | select(.opcode=="crc32c") 00:28:21.744 | "\(.module_name) \(.executed)"' 00:28:21.744 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1755325 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1755325 ']' 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1755325 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1755325 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1755325' 00:28:22.005 killing process with pid 1755325 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1755325 00:28:22.005 Received shutdown signal, test time was about 2.000000 seconds 00:28:22.005 00:28:22.005 Latency(us) 00:28:22.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.005 =================================================================================================================== 00:28:22.005 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1755325 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:22.005 21:04:25 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1756043 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1756043 /var/tmp/bperf.sock 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1756043 ']' 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:22.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:22.005 21:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.265 [2024-07-15 21:04:25.943549] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:28:22.265 [2024-07-15 21:04:25.943604] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756043 ] 00:28:22.265 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:22.266 Zero copy mechanism will not be used. 
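After each pass the harness asks the bdevperf instance which accel module actually computed the crc32c digests and how many operations it executed; with scan_dsa=false the expectation, per the exp_module=software and (( acc_executed > 0 )) checks in the trace, is that the software module did the work. Condensed from the accel_get_stats / jq lines above:
  # report which module executed crc32c and how many times
  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # the test passes when this prints: software <count greater than zero>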
00:28:22.266 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.266 [2024-07-15 21:04:26.018583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.266 [2024-07-15 21:04:26.072133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.835 21:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:22.835 21:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:22.835 21:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:22.836 21:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:22.836 21:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:23.096 21:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.096 21:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.356 nvme0n1 00:28:23.356 21:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:23.356 21:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:23.616 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:23.616 Zero copy mechanism will not be used. 00:28:23.616 Running I/O for 2 seconds... 
00:28:25.526 00:28:25.526 Latency(us) 00:28:25.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.526 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:25.526 nvme0n1 : 2.00 2985.14 373.14 0.00 0.00 5351.18 3822.93 21626.88 00:28:25.526 =================================================================================================================== 00:28:25.526 Total : 2985.14 373.14 0.00 0.00 5351.18 3822.93 21626.88 00:28:25.526 0 00:28:25.526 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:25.526 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:25.526 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:25.526 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:25.526 | select(.opcode=="crc32c") 00:28:25.526 | "\(.module_name) \(.executed)"' 00:28:25.526 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:25.786 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:25.786 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:25.786 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:25.786 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:25.786 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1756043 00:28:25.786 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1756043 ']' 00:28:25.786 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1756043 00:28:25.786 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:25.786 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:25.786 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1756043 00:28:25.786 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:25.787 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:25.787 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1756043' 00:28:25.787 killing process with pid 1756043 00:28:25.787 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1756043 00:28:25.787 Received shutdown signal, test time was about 2.000000 seconds 00:28:25.787 00:28:25.787 Latency(us) 00:28:25.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.787 =================================================================================================================== 00:28:25.787 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:25.787 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1756043 00:28:25.787 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1753599 00:28:25.787 21:04:29 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1753599 ']' 00:28:25.787 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1753599 00:28:25.787 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:25.787 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:25.787 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1753599 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1753599' 00:28:26.047 killing process with pid 1753599 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1753599 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1753599 00:28:26.047 00:28:26.047 real 0m16.344s 00:28:26.047 user 0m32.167s 00:28:26.047 sys 0m3.169s 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:26.047 ************************************ 00:28:26.047 END TEST nvmf_digest_clean 00:28:26.047 ************************************ 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:26.047 ************************************ 00:28:26.047 START TEST nvmf_digest_error 00:28:26.047 ************************************ 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1756992 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1756992 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1756992 ']' 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:26.047 21:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.308 [2024-07-15 21:04:29.966250] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:28:26.308 [2024-07-15 21:04:29.966298] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.308 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.308 [2024-07-15 21:04:30.033077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.308 [2024-07-15 21:04:30.101320] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.308 [2024-07-15 21:04:30.101359] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.308 [2024-07-15 21:04:30.101368] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.308 [2024-07-15 21:04:30.101376] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.308 [2024-07-15 21:04:30.101383] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
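Where nvmf_digest_clean only confirmed that crc32c ran in the software module, the error test that follows deliberately corrupts the digest: reading the trace below, crc32c is reassigned to the accel "error" module via rpc_cmd (which addresses the nvmf target, unlike the bperf_rpc calls that go to /var/tmp/bperf.sock) and corruption is then injected, so the bdevperf initiator attached with --ddgst reports the data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions seen further down. A condensed sketch using the suite's shell helpers (rpc_cmd, bperf_rpc, bperf_py) exactly as the trace shows them; the comments are one reading of that ordering:
  rpc_cmd accel_assign_opc -o crc32c -m error                  # route crc32c through the error-injection module
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable        # injection off while the controller is attached
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 # then corrupt crc32c results (parameters as in the trace)
  bperf_py perform_tests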
00:28:26.308 [2024-07-15 21:04:30.101403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.878 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:26.878 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:26.878 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:26.878 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:26.878 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.878 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.878 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:26.878 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.879 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.879 [2024-07-15 21:04:30.771325] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:27.139 null0 00:28:27.139 [2024-07-15 21:04:30.851879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.139 [2024-07-15 21:04:30.876078] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1757075 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1757075 /var/tmp/bperf.sock 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1757075 ']' 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:27.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:27.139 21:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:27.139 [2024-07-15 21:04:30.929413] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:28:27.139 [2024-07-15 21:04:30.929458] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1757075 ] 00:28:27.139 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.139 [2024-07-15 21:04:31.006170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.400 [2024-07-15 21:04:31.060143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.971 21:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:27.971 21:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:27.971 21:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:27.971 21:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:27.971 21:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:27.971 21:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.971 21:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:27.971 21:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.971 21:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.971 21:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.542 nvme0n1 00:28:28.542 21:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:28.542 21:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.542 21:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:28.542 21:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.542 21:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:28.542 21:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:28.542 Running I/O for 2 seconds... 00:28:28.542 [2024-07-15 21:04:32.394356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.542 [2024-07-15 21:04:32.394383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.542 [2024-07-15 21:04:32.394396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.542 [2024-07-15 21:04:32.410226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.542 [2024-07-15 21:04:32.410246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.542 [2024-07-15 21:04:32.410253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.542 [2024-07-15 21:04:32.423485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.542 [2024-07-15 21:04:32.423502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.542 [2024-07-15 21:04:32.423509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.804 [2024-07-15 21:04:32.435953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.804 [2024-07-15 21:04:32.435970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-07-15 21:04:32.435977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.804 [2024-07-15 21:04:32.447410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.804 [2024-07-15 21:04:32.447427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-07-15 21:04:32.447433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.804 [2024-07-15 21:04:32.460406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.804 [2024-07-15 21:04:32.460423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-07-15 21:04:32.460430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.804 [2024-07-15 21:04:32.472598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.804 [2024-07-15 21:04:32.472615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19085 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-07-15 21:04:32.472622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.804 [2024-07-15 21:04:32.485907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.804 [2024-07-15 21:04:32.485923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-07-15 21:04:32.485930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.804 [2024-07-15 21:04:32.497483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.804 [2024-07-15 21:04:32.497499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-07-15 21:04:32.497505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.804 [2024-07-15 21:04:32.509536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.804 [2024-07-15 21:04:32.509557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-07-15 21:04:32.509564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.804 [2024-07-15 21:04:32.521914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.804 [2024-07-15 21:04:32.521931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-07-15 21:04:32.521937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.804 [2024-07-15 21:04:32.534331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.804 [2024-07-15 21:04:32.534348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-07-15 21:04:32.534354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.804 [2024-07-15 21:04:32.546041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.804 [2024-07-15 21:04:32.546058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-07-15 21:04:32.546064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.804 [2024-07-15 21:04:32.558496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.804 [2024-07-15 21:04:32.558513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-07-15 21:04:32.558519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.804 [2024-07-15 21:04:32.570638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.804 [2024-07-15 21:04:32.570655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-07-15 21:04:32.570661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.804 [2024-07-15 21:04:32.582638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.804 [2024-07-15 21:04:32.582654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-07-15 21:04:32.582660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.804 [2024-07-15 21:04:32.595220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.804 [2024-07-15 21:04:32.595236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-07-15 21:04:32.595242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.805 [2024-07-15 21:04:32.607264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.805 [2024-07-15 21:04:32.607281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.805 [2024-07-15 21:04:32.607287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.805 [2024-07-15 21:04:32.619618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.805 [2024-07-15 21:04:32.619635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.805 [2024-07-15 21:04:32.619641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.805 [2024-07-15 21:04:32.632437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.805 [2024-07-15 21:04:32.632454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.805 [2024-07-15 21:04:32.632460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.805 [2024-07-15 21:04:32.645712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 
00:28:28.805 [2024-07-15 21:04:32.645728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.805 [2024-07-15 21:04:32.645734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.805 [2024-07-15 21:04:32.656873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.805 [2024-07-15 21:04:32.656890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.805 [2024-07-15 21:04:32.656896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.805 [2024-07-15 21:04:32.668856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.805 [2024-07-15 21:04:32.668873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.805 [2024-07-15 21:04:32.668879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.805 [2024-07-15 21:04:32.681087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.805 [2024-07-15 21:04:32.681104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.805 [2024-07-15 21:04:32.681110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.805 [2024-07-15 21:04:32.694018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:28.805 [2024-07-15 21:04:32.694035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.805 [2024-07-15 21:04:32.694041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.065 [2024-07-15 21:04:32.705304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:29.065 [2024-07-15 21:04:32.705321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.065 [2024-07-15 21:04:32.705327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.065 [2024-07-15 21:04:32.718980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:29.065 [2024-07-15 21:04:32.719000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.065 [2024-07-15 21:04:32.719006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.065 [2024-07-15 21:04:32.731046] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0)
00:28:29.065 [2024-07-15 21:04:32.731063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.065 [2024-07-15 21:04:32.731069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same triplet — nvme_tcp.c:1459 data digest error on tqpair=(0x1f178e0), nvme_qpair.c: 243 READ command print, nvme_qpair.c: 474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats continuously, differing only in timestamp, cid, and lba, from 2024-07-15 21:04:32.742 through 21:04:34.378 ...]
p:0 m:0 dnr:0 00:28:30.634 [2024-07-15 21:04:34.341286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:30.634 [2024-07-15 21:04:34.341303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.634 [2024-07-15 21:04:34.341309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.634 [2024-07-15 21:04:34.354217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:30.634 [2024-07-15 21:04:34.354233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.634 [2024-07-15 21:04:34.354239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.634 [2024-07-15 21:04:34.366692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:30.634 [2024-07-15 21:04:34.366708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.634 [2024-07-15 21:04:34.366715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.634 [2024-07-15 21:04:34.378476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f178e0) 00:28:30.634 [2024-07-15 21:04:34.378492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.634 [2024-07-15 21:04:34.378498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.634 00:28:30.634 Latency(us) 00:28:30.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.634 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:30.634 nvme0n1 : 2.00 20687.83 80.81 0.00 0.00 6180.20 3932.16 15400.96 00:28:30.634 =================================================================================================================== 00:28:30.634 Total : 20687.83 80.81 0.00 0.00 6180.20 3932.16 15400.96 00:28:30.634 0 00:28:30.634 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:30.634 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:30.634 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:30.634 | .driver_specific 00:28:30.634 | .nvme_error 00:28:30.634 | .status_code 00:28:30.634 | .command_transient_transport_error' 00:28:30.634 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1757075 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@948 -- # '[' -z 1757075 ']' 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1757075 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1757075 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1757075' 00:28:30.895 killing process with pid 1757075 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1757075 00:28:30.895 Received shutdown signal, test time was about 2.000000 seconds 00:28:30.895 00:28:30.895 Latency(us) 00:28:30.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.895 =================================================================================================================== 00:28:30.895 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1757075 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1757838 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1757838 /var/tmp/bperf.sock 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1757838 ']' 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:30.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:30.895 21:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.895 [2024-07-15 21:04:34.784400] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:28:30.895 [2024-07-15 21:04:34.784454] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1757838 ] 00:28:30.895 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:30.895 Zero copy mechanism will not be used. 00:28:31.155 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.155 [2024-07-15 21:04:34.860346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.155 [2024-07-15 21:04:34.913731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.726 21:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:31.726 21:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:31.726 21:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:31.726 21:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:31.986 21:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:31.986 21:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.986 21:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.986 21:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.986 21:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.986 21:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.246 nvme0n1 00:28:32.246 21:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:32.246 21:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.246 21:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:32.246 21:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.246 21:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:32.246 21:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:32.507 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:32.507 Zero copy mechanism will not be used. 00:28:32.507 Running I/O for 2 seconds... 
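The trace above drives the whole digest-error check over bdevperf's RPC socket: NVMe error statistics are enabled with an unlimited retry count, crc32c corruption is injected through accel_error_inject_error, the controller is attached with --ddgst so data digests are verified, perform_tests runs the job, and bdev_get_iostat is then polled for the transient-transport-error counter (the "(( 162 > 0 ))" check earlier in the trace). The following is only a condensed stand-alone sketch of that sequence, reusing the paths and socket shown in this run and assuming a bdevperf instance is already listening on /var/tmp/bperf.sock; it is not the digest.sh script itself.

#!/usr/bin/env bash
# Sketch of the RPC sequence visible in the trace above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock
RPC="$SPDK/scripts/rpc.py -s $SOCK"

# Keep per-controller NVMe error statistics and retry indefinitely, as digest.sh does.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# (In the trace, crc32c corruption is toggled with
#  "rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32"; the socket that
#  helper targets is not shown in this excerpt, so that step is omitted here.)

# Attach the TCP controller with data digest enabled, then run the bdevperf job.
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

# Each data digest error should have completed back as a transient transport error.
count=$($RPC bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( count > 0 )) && echo "recorded $count transient transport errors"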
00:28:32.507 [2024-07-15 21:04:36.206104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.507 [2024-07-15 21:04:36.206138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.507 [2024-07-15 21:04:36.206147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.507 [2024-07-15 21:04:36.221921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.507 [2024-07-15 21:04:36.221942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.507 [2024-07-15 21:04:36.221949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.507 [2024-07-15 21:04:36.236932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.507 [2024-07-15 21:04:36.236950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.507 [2024-07-15 21:04:36.236957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.507 [2024-07-15 21:04:36.255913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.507 [2024-07-15 21:04:36.255931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.507 [2024-07-15 21:04:36.255938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.507 [2024-07-15 21:04:36.271031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.507 [2024-07-15 21:04:36.271049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.507 [2024-07-15 21:04:36.271056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.507 [2024-07-15 21:04:36.286841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.507 [2024-07-15 21:04:36.286859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.507 [2024-07-15 21:04:36.286866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.507 [2024-07-15 21:04:36.302424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.507 [2024-07-15 21:04:36.302442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.507 [2024-07-15 21:04:36.302448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.507 [2024-07-15 21:04:36.318128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.507 [2024-07-15 21:04:36.318147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.507 [2024-07-15 21:04:36.318153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.507 [2024-07-15 21:04:36.332382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.507 [2024-07-15 21:04:36.332400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.507 [2024-07-15 21:04:36.332406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.507 [2024-07-15 21:04:36.347868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.507 [2024-07-15 21:04:36.347885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.507 [2024-07-15 21:04:36.347892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.507 [2024-07-15 21:04:36.363400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.507 [2024-07-15 21:04:36.363418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.507 [2024-07-15 21:04:36.363424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.507 [2024-07-15 21:04:36.377724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.507 [2024-07-15 21:04:36.377742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.507 [2024-07-15 21:04:36.377752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.507 [2024-07-15 21:04:36.394015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.507 [2024-07-15 21:04:36.394033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.507 [2024-07-15 21:04:36.394039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.408892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.408910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.769 [2024-07-15 21:04:36.408917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.424579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.424597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.769 [2024-07-15 21:04:36.424603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.439075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.439092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.769 [2024-07-15 21:04:36.439098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.454017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.454035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.769 [2024-07-15 21:04:36.454041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.468881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.468899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.769 [2024-07-15 21:04:36.468905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.483945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.483963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.769 [2024-07-15 21:04:36.483969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.500115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.500139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.769 [2024-07-15 21:04:36.500146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.516114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.516137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:32.769 [2024-07-15 21:04:36.516143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.529167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.529185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.769 [2024-07-15 21:04:36.529191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.545299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.545316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.769 [2024-07-15 21:04:36.545322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.560072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.560089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.769 [2024-07-15 21:04:36.560096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.575757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.575774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.769 [2024-07-15 21:04:36.575780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.590479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.590497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.769 [2024-07-15 21:04:36.590503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.606137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.606155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.769 [2024-07-15 21:04:36.606161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.621726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.621743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.769 [2024-07-15 21:04:36.621749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.637313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.637329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.769 [2024-07-15 21:04:36.637339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.769 [2024-07-15 21:04:36.651105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:32.769 [2024-07-15 21:04:36.651127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.770 [2024-07-15 21:04:36.651133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.032 [2024-07-15 21:04:36.666445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.032 [2024-07-15 21:04:36.666463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.032 [2024-07-15 21:04:36.666469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.032 [2024-07-15 21:04:36.679333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.032 [2024-07-15 21:04:36.679350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.032 [2024-07-15 21:04:36.679356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.032 [2024-07-15 21:04:36.691669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.032 [2024-07-15 21:04:36.691686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.032 [2024-07-15 21:04:36.691693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.032 [2024-07-15 21:04:36.707800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.032 [2024-07-15 21:04:36.707820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.032 [2024-07-15 21:04:36.707828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.032 [2024-07-15 21:04:36.724416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.032 [2024-07-15 21:04:36.724435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.032 [2024-07-15 21:04:36.724442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.032 [2024-07-15 21:04:36.740508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.032 [2024-07-15 21:04:36.740527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.032 [2024-07-15 21:04:36.740533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.032 [2024-07-15 21:04:36.755044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.032 [2024-07-15 21:04:36.755061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.032 [2024-07-15 21:04:36.755067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.032 [2024-07-15 21:04:36.770374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.032 [2024-07-15 21:04:36.770395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.032 [2024-07-15 21:04:36.770402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.032 [2024-07-15 21:04:36.786069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.032 [2024-07-15 21:04:36.786087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.032 [2024-07-15 21:04:36.786093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.032 [2024-07-15 21:04:36.799813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.032 [2024-07-15 21:04:36.799831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.032 [2024-07-15 21:04:36.799837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.032 [2024-07-15 21:04:36.809916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.032 [2024-07-15 21:04:36.809933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.032 [2024-07-15 21:04:36.809939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.032 [2024-07-15 21:04:36.825761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 
00:28:33.032 [2024-07-15 21:04:36.825778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.032 [2024-07-15 21:04:36.825784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.032 [2024-07-15 21:04:36.840990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.032 [2024-07-15 21:04:36.841007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.032 [2024-07-15 21:04:36.841013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.032 [2024-07-15 21:04:36.856404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.032 [2024-07-15 21:04:36.856421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.032 [2024-07-15 21:04:36.856427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.032 [2024-07-15 21:04:36.871767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.032 [2024-07-15 21:04:36.871785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.032 [2024-07-15 21:04:36.871791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.033 [2024-07-15 21:04:36.887832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.033 [2024-07-15 21:04:36.887849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.033 [2024-07-15 21:04:36.887855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.033 [2024-07-15 21:04:36.901397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.033 [2024-07-15 21:04:36.901415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.033 [2024-07-15 21:04:36.901421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.033 [2024-07-15 21:04:36.917465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.033 [2024-07-15 21:04:36.917483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.033 [2024-07-15 21:04:36.917489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.294 [2024-07-15 21:04:36.932589] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.294 [2024-07-15 21:04:36.932607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.294 [2024-07-15 21:04:36.932613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.294 [2024-07-15 21:04:36.948285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.294 [2024-07-15 21:04:36.948302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.294 [2024-07-15 21:04:36.948309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.294 [2024-07-15 21:04:36.963763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.294 [2024-07-15 21:04:36.963780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.294 [2024-07-15 21:04:36.963786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.294 [2024-07-15 21:04:36.980181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.294 [2024-07-15 21:04:36.980199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.294 [2024-07-15 21:04:36.980205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.294 [2024-07-15 21:04:36.997645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.294 [2024-07-15 21:04:36.997662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.295 [2024-07-15 21:04:36.997668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.295 [2024-07-15 21:04:37.016639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.295 [2024-07-15 21:04:37.016657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.295 [2024-07-15 21:04:37.016663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.295 [2024-07-15 21:04:37.028442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.295 [2024-07-15 21:04:37.028460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.295 [2024-07-15 21:04:37.028469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:28:33.295 [2024-07-15 21:04:37.041654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.295 [2024-07-15 21:04:37.041671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.295 [2024-07-15 21:04:37.041678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.295 [2024-07-15 21:04:37.056616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.295 [2024-07-15 21:04:37.056634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.295 [2024-07-15 21:04:37.056640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.295 [2024-07-15 21:04:37.073396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.295 [2024-07-15 21:04:37.073414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.295 [2024-07-15 21:04:37.073420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.295 [2024-07-15 21:04:37.088303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.295 [2024-07-15 21:04:37.088322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.295 [2024-07-15 21:04:37.088328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.295 [2024-07-15 21:04:37.105031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.295 [2024-07-15 21:04:37.105049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.295 [2024-07-15 21:04:37.105055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.295 [2024-07-15 21:04:37.119955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.295 [2024-07-15 21:04:37.119973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.295 [2024-07-15 21:04:37.119980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.295 [2024-07-15 21:04:37.133366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.295 [2024-07-15 21:04:37.133385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.295 [2024-07-15 21:04:37.133391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.295 [2024-07-15 21:04:37.148927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.295 [2024-07-15 21:04:37.148945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.295 [2024-07-15 21:04:37.148951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.295 [2024-07-15 21:04:37.164773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.295 [2024-07-15 21:04:37.164791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.295 [2024-07-15 21:04:37.164797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.295 [2024-07-15 21:04:37.181020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.295 [2024-07-15 21:04:37.181038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.295 [2024-07-15 21:04:37.181044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.556 [2024-07-15 21:04:37.195110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.556 [2024-07-15 21:04:37.195133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.556 [2024-07-15 21:04:37.195139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.556 [2024-07-15 21:04:37.207695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.556 [2024-07-15 21:04:37.207713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.556 [2024-07-15 21:04:37.207719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.556 [2024-07-15 21:04:37.221604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.556 [2024-07-15 21:04:37.221622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.556 [2024-07-15 21:04:37.221628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.556 [2024-07-15 21:04:37.234944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.556 [2024-07-15 21:04:37.234962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.556 [2024-07-15 21:04:37.234968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.556 [2024-07-15 21:04:37.251196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.557 [2024-07-15 21:04:37.251214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-07-15 21:04:37.251220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.557 [2024-07-15 21:04:37.266570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.557 [2024-07-15 21:04:37.266588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-07-15 21:04:37.266594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.557 [2024-07-15 21:04:37.284111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.557 [2024-07-15 21:04:37.284133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-07-15 21:04:37.284142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-07-15 21:04:37.299328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.557 [2024-07-15 21:04:37.299346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-07-15 21:04:37.299352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.557 [2024-07-15 21:04:37.312453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.557 [2024-07-15 21:04:37.312471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-07-15 21:04:37.312477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.557 [2024-07-15 21:04:37.325704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.557 [2024-07-15 21:04:37.325721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-07-15 21:04:37.325727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.557 [2024-07-15 21:04:37.337745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.557 [2024-07-15 21:04:37.337763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 
[2024-07-15 21:04:37.337769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-07-15 21:04:37.352785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.557 [2024-07-15 21:04:37.352803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-07-15 21:04:37.352809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.557 [2024-07-15 21:04:37.368182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.557 [2024-07-15 21:04:37.368200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-07-15 21:04:37.368206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.557 [2024-07-15 21:04:37.384375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.557 [2024-07-15 21:04:37.384393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-07-15 21:04:37.384400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.557 [2024-07-15 21:04:37.399323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.557 [2024-07-15 21:04:37.399341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-07-15 21:04:37.399347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.557 [2024-07-15 21:04:37.414456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.557 [2024-07-15 21:04:37.414476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-07-15 21:04:37.414482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.557 [2024-07-15 21:04:37.429020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.557 [2024-07-15 21:04:37.429038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-07-15 21:04:37.429044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.557 [2024-07-15 21:04:37.444682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.557 [2024-07-15 21:04:37.444700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9120 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.557 [2024-07-15 21:04:37.444706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.456738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.456756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.456762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.473605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.473622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.473628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.489410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.489427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.489433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.505156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.505174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.505180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.521335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.521353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.521359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.537568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.537586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.537592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.554533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.554551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.554557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.569962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.569979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.569986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.585092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.585111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.585117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.600243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.600262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.600268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.615522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.615541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.615547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.630245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.630264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.630270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.646098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.646116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.646126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.662262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.662279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.662286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.677521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.677539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.677548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.692457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.692475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.821 [2024-07-15 21:04:37.692481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.821 [2024-07-15 21:04:37.708178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:33.821 [2024-07-15 21:04:37.708196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.822 [2024-07-15 21:04:37.708202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.148 [2024-07-15 21:04:37.725013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.148 [2024-07-15 21:04:37.725032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.148 [2024-07-15 21:04:37.725038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.148 [2024-07-15 21:04:37.740420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.148 [2024-07-15 21:04:37.740438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.148 [2024-07-15 21:04:37.740444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.148 [2024-07-15 21:04:37.755919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.148 [2024-07-15 21:04:37.755937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.148 [2024-07-15 21:04:37.755943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.148 [2024-07-15 21:04:37.770655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 
00:28:34.148 [2024-07-15 21:04:37.770673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.148 [2024-07-15 21:04:37.770679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.148 [2024-07-15 21:04:37.785048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.148 [2024-07-15 21:04:37.785066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.148 [2024-07-15 21:04:37.785072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.148 [2024-07-15 21:04:37.800071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.148 [2024-07-15 21:04:37.800088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.148 [2024-07-15 21:04:37.800094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.148 [2024-07-15 21:04:37.815621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.148 [2024-07-15 21:04:37.815643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.148 [2024-07-15 21:04:37.815649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.148 [2024-07-15 21:04:37.831137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.148 [2024-07-15 21:04:37.831155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.148 [2024-07-15 21:04:37.831160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.148 [2024-07-15 21:04:37.846439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.148 [2024-07-15 21:04:37.846456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.148 [2024-07-15 21:04:37.846462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.148 [2024-07-15 21:04:37.862872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.148 [2024-07-15 21:04:37.862890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.148 [2024-07-15 21:04:37.862896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.148 [2024-07-15 21:04:37.879084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1084b80) 00:28:34.148 [2024-07-15 21:04:37.879102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.148 [2024-07-15 21:04:37.879108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.148 [2024-07-15 21:04:37.895013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.149 [2024-07-15 21:04:37.895031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.149 [2024-07-15 21:04:37.895037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.149 [2024-07-15 21:04:37.909957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.149 [2024-07-15 21:04:37.909975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.149 [2024-07-15 21:04:37.909981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.149 [2024-07-15 21:04:37.926090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.149 [2024-07-15 21:04:37.926108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.149 [2024-07-15 21:04:37.926114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.149 [2024-07-15 21:04:37.943222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.149 [2024-07-15 21:04:37.943247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.149 [2024-07-15 21:04:37.943253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.149 [2024-07-15 21:04:37.956391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.149 [2024-07-15 21:04:37.956409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.149 [2024-07-15 21:04:37.956416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.149 [2024-07-15 21:04:37.970748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.149 [2024-07-15 21:04:37.970766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.149 [2024-07-15 21:04:37.970772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.149 [2024-07-15 21:04:37.979394] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.149 [2024-07-15 21:04:37.979410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.149 [2024-07-15 21:04:37.979416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.149 [2024-07-15 21:04:37.996344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.149 [2024-07-15 21:04:37.996361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.149 [2024-07-15 21:04:37.996367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.149 [2024-07-15 21:04:38.006524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.149 [2024-07-15 21:04:38.006541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.149 [2024-07-15 21:04:38.006547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.149 [2024-07-15 21:04:38.021334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.149 [2024-07-15 21:04:38.021351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.149 [2024-07-15 21:04:38.021357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.149 [2024-07-15 21:04:38.036279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.149 [2024-07-15 21:04:38.036297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.149 [2024-07-15 21:04:38.036303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.410 [2024-07-15 21:04:38.052253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.410 [2024-07-15 21:04:38.052270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.410 [2024-07-15 21:04:38.052277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.410 [2024-07-15 21:04:38.066886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.410 [2024-07-15 21:04:38.066904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.410 [2024-07-15 21:04:38.066913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:34.410 [2024-07-15 21:04:38.084461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.410 [2024-07-15 21:04:38.084478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.410 [2024-07-15 21:04:38.084484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.410 [2024-07-15 21:04:38.098010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.410 [2024-07-15 21:04:38.098029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.410 [2024-07-15 21:04:38.098035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.410 [2024-07-15 21:04:38.112000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.410 [2024-07-15 21:04:38.112018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.410 [2024-07-15 21:04:38.112024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.410 [2024-07-15 21:04:38.125790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.410 [2024-07-15 21:04:38.125808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.410 [2024-07-15 21:04:38.125814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.410 [2024-07-15 21:04:38.140355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.410 [2024-07-15 21:04:38.140372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.410 [2024-07-15 21:04:38.140379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.410 [2024-07-15 21:04:38.154231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.410 [2024-07-15 21:04:38.154248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.410 [2024-07-15 21:04:38.154254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.410 [2024-07-15 21:04:38.168309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80) 00:28:34.410 [2024-07-15 21:04:38.168327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.410 [2024-07-15 21:04:38.168333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:34.410 [2024-07-15 21:04:38.184832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1084b80)
00:28:34.410 [2024-07-15 21:04:38.184850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:34.410 [2024-07-15 21:04:38.184856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:34.410
00:28:34.410 Latency(us)
00:28:34.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:34.410 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:34.410 nvme0n1 : 2.04 2012.58 251.57 0.00 0.00 7798.53 1686.19 49588.91
00:28:34.410 ===================================================================================================================
00:28:34.410 Total : 2012.58 251.57 0.00 0.00 7798.53 1686.19 49588.91
00:28:34.410 0
00:28:34.410 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:34.410 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:34.410 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:34.410 | .driver_specific
00:28:34.410 | .nvme_error
00:28:34.410 | .status_code
00:28:34.410 | .command_transient_transport_error'
00:28:34.410 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:34.672 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 132 > 0 ))
00:28:34.672 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1757838
00:28:34.672 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1757838 ']'
00:28:34.672 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1757838
00:28:34.672 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:34.672 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:34.672 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1757838
00:28:34.672 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:34.672 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:34.672 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1757838'
00:28:34.672 killing process with pid 1757838
00:28:34.672 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1757838
00:28:34.672 Received shutdown signal, test time was about 2.000000 seconds
00:28:34.672
00:28:34.672 Latency(us)
00:28:34.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:34.672 ===================================================================================================================
00:28:34.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:34.672 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@972 -- # wait 1757838
00:28:34.934 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:34.934 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:34.934 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:34.934 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:34.934 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:34.934 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1758670
00:28:34.934 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1758670 /var/tmp/bperf.sock
00:28:34.934 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:34.934 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1758670 ']'
00:28:34.934 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:34.934 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:34.934 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:34.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:34.934 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:34.934 21:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:34.934 [2024-07-15 21:04:38.633295] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization...
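
The trace above shows run_bperf_err starting a dedicated bdevperf instance for the randwrite / 4096-byte / queue-depth-128 case and then waiting (waitforlisten) for its private RPC socket before anything else is configured. A minimal stand-alone sketch of that launch, assuming the same binary and socket paths and using a simple polling loop in place of the real waitforlisten helper, looks like:

    # Start bdevperf idle (-z) on core mask 0x2 with its own RPC socket:
    # randwrite workload, 4096-byte I/O, queue depth 128, 2-second runs.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # Wait until the UNIX-domain RPC socket answers before sending any RPCs
    # (illustrative poll; the autotest helper does the equivalent with retries).
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
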
00:28:34.934 [2024-07-15 21:04:38.633350] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1758670 ]
00:28:34.934 EAL: No free 2048 kB hugepages reported on node 1
00:28:34.934 [2024-07-15 21:04:38.708190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:34.934 [2024-07-15 21:04:38.762303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:35.506 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:35.506 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:35.506 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:35.506 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:35.767 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:35.767 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:35.767 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:35.767 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:35.767 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:35.767 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:36.028 nvme0n1
00:28:36.028 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:36.028 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:36.028 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:36.029 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:36.029 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:36.029 21:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:36.029 Running I/O for 2 seconds...
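
Condensed, the write-path digest-error case above is driven entirely over RPC: the bdevperf process gets NVMe error counters and unlimited bdev retries, crc32c corruption is injected every 256th operation (via rpc_cmd, which in this trace goes to the application rather than to the bperf socket), the controller is attached with TCP data digest enabled (--ddgst), and a 2-second run is started; afterwards the transient-transport-error count is read back the same way get_transient_errcount did earlier in this log. A hedged sketch of that sequence, not the digest.sh implementation itself, assuming the bdevperf socket above and a target reachable on rpc.py's default socket:

    bperf='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    tgt='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py'   # assumption: target app on the default RPC socket
    $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1     # count NVMe errors, retry failed I/O indefinitely
    $tgt accel_error_inject_error -o crc32c -t corrupt -i 256                # corrupt every 256th crc32c (data digest) operation
    $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                               # data digest on, so the corrupted CRCs surface as digest errors
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests                                 # run the 2-second randwrite job
    # The pass criterion is a non-zero transient transport error count on the bdev:
    $bperf bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
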
00:28:36.029 [2024-07-15 21:04:39.895434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.029 [2024-07-15 21:04:39.895726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.029 [2024-07-15 21:04:39.895752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.029 [2024-07-15 21:04:39.907629] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.029 [2024-07-15 21:04:39.907927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.029 [2024-07-15 21:04:39.907946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.029 [2024-07-15 21:04:39.919864] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.029 [2024-07-15 21:04:39.920150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.029 [2024-07-15 21:04:39.920167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:39.932070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:39.932378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:39.932394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:39.944255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:39.944708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:39.944724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:39.956415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:39.956720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:39.956736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:39.968644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:39.968938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:39.968954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 
dnr:0 00:28:36.290 [2024-07-15 21:04:39.981024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:39.981319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:39.981334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:39.993177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:39.993462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:39.993477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:40.005805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:40.006198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:40.006219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:40.017985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:40.018466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:40.018483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:40.030138] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:40.030555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:40.030578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:40.044273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:40.044671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:40.044691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:40.056514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:40.056966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:40.056981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f 
p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:40.068709] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:40.069018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:40.069033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:40.080877] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:40.081147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:40.081163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:40.093056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:40.093487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:40.093502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:40.105209] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:40.105472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:40.105487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:40.117348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:40.117958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:40.117974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.290 [2024-07-15 21:04:40.130559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.290 [2024-07-15 21:04:40.130838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.290 [2024-07-15 21:04:40.130853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.291 [2024-07-15 21:04:40.142713] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.291 [2024-07-15 21:04:40.143130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.291 [2024-07-15 21:04:40.143145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:28:36.291 [2024-07-15 21:04:40.154852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.291 [2024-07-15 21:04:40.155154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.291 [2024-07-15 21:04:40.155170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.291 [2024-07-15 21:04:40.166977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.291 [2024-07-15 21:04:40.167376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.291 [2024-07-15 21:04:40.167391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.291 [2024-07-15 21:04:40.179108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.291 [2024-07-15 21:04:40.179417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.291 [2024-07-15 21:04:40.179432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.552 [2024-07-15 21:04:40.191269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.552 [2024-07-15 21:04:40.191555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.552 [2024-07-15 21:04:40.191570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.552 [2024-07-15 21:04:40.203395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.552 [2024-07-15 21:04:40.203687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.552 [2024-07-15 21:04:40.203702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.552 [2024-07-15 21:04:40.215582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.552 [2024-07-15 21:04:40.216053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.552 [2024-07-15 21:04:40.216068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.552 [2024-07-15 21:04:40.227739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.552 [2024-07-15 21:04:40.228219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.552 [2024-07-15 21:04:40.228238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.552 [2024-07-15 21:04:40.239859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.552 [2024-07-15 21:04:40.240335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.552 [2024-07-15 21:04:40.240352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.552 [2024-07-15 21:04:40.252058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.552 [2024-07-15 21:04:40.252471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.552 [2024-07-15 21:04:40.252487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.552 [2024-07-15 21:04:40.264234] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.552 [2024-07-15 21:04:40.264705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.552 [2024-07-15 21:04:40.264720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.552 [2024-07-15 21:04:40.276337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.552 [2024-07-15 21:04:40.276736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.552 [2024-07-15 21:04:40.276751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.552 [2024-07-15 21:04:40.288471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.552 [2024-07-15 21:04:40.288890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.552 [2024-07-15 21:04:40.288906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.552 [2024-07-15 21:04:40.300605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.552 [2024-07-15 21:04:40.301063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.552 [2024-07-15 21:04:40.301078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.552 [2024-07-15 21:04:40.312745] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.552 [2024-07-15 21:04:40.313183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.552 [2024-07-15 21:04:40.313198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.552 [2024-07-15 21:04:40.324875] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.552 [2024-07-15 21:04:40.325377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.552 [2024-07-15 21:04:40.325392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.553 [2024-07-15 21:04:40.336990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.553 [2024-07-15 21:04:40.337276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.553 [2024-07-15 21:04:40.337292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.553 [2024-07-15 21:04:40.349114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.553 [2024-07-15 21:04:40.349540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.553 [2024-07-15 21:04:40.349555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.553 [2024-07-15 21:04:40.361210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.553 [2024-07-15 21:04:40.361580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.553 [2024-07-15 21:04:40.361595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.553 [2024-07-15 21:04:40.373333] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.553 [2024-07-15 21:04:40.373628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.553 [2024-07-15 21:04:40.373643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.553 [2024-07-15 21:04:40.385499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.553 [2024-07-15 21:04:40.385914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.553 [2024-07-15 21:04:40.385929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.553 [2024-07-15 21:04:40.397605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.553 [2024-07-15 21:04:40.397890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.553 [2024-07-15 21:04:40.397904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.553 [2024-07-15 21:04:40.409798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.553 [2024-07-15 21:04:40.410251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.553 [2024-07-15 21:04:40.410267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.553 [2024-07-15 21:04:40.421866] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.553 [2024-07-15 21:04:40.422137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.553 [2024-07-15 21:04:40.422152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.553 [2024-07-15 21:04:40.434060] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.553 [2024-07-15 21:04:40.434520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.553 [2024-07-15 21:04:40.434536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.814 [2024-07-15 21:04:40.446147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.814 [2024-07-15 21:04:40.446560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.814 [2024-07-15 21:04:40.446577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.814 [2024-07-15 21:04:40.458391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.814 [2024-07-15 21:04:40.458802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.814 [2024-07-15 21:04:40.458818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.814 [2024-07-15 21:04:40.470522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.814 [2024-07-15 21:04:40.470949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.814 [2024-07-15 21:04:40.470964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.814 [2024-07-15 21:04:40.482673] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.814 [2024-07-15 21:04:40.483077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.483092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.494818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.495329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.495344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.506985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.507471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.507486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.519150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.519584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.519599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.531239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.531640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.531655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.543345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.543616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.543635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.555535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.555912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.555927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.567682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.567964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.567980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.579766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.580154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.580169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.591865] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.592258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.592273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.603960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.604268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.604284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.616098] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.616566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.616581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.628223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.628673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.628688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.640294] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.640690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.640705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.652464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.652984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.653002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.664560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.664840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.664856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.676707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.677239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.677254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.688857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.689132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.689147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:36.815 [2024-07-15 21:04:40.700969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:36.815 [2024-07-15 21:04:40.701410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.815 [2024-07-15 21:04:40.701425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.076 [2024-07-15 21:04:40.713050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.076 [2024-07-15 21:04:40.713447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.076 [2024-07-15 21:04:40.713463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.076 [2024-07-15 21:04:40.725180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.076 [2024-07-15 21:04:40.725598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.076 [2024-07-15 21:04:40.725614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.737279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.737753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.737768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.749335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.749756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.749771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.761507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.761943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.761958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.773663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.774064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.774079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.785796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.786227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.786242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.797912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.798301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.798316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.810011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.810443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.810459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.822130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.822405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.822420] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.834249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.834546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.834561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.846448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.846914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.846929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.858522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.858795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.858811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.870646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.871021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.871035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.882775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.883060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.883074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.894913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.895369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.895383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.907024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.907412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.907426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.919190] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.919651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.919666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.931341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.931653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.931669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.943436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.943929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.943944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.955577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.955870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.955885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.077 [2024-07-15 21:04:40.967745] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.077 [2024-07-15 21:04:40.968039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-15 21:04:40.968057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.339 [2024-07-15 21:04:40.979967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.339 [2024-07-15 21:04:40.980346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.339 [2024-07-15 21:04:40.980360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.339 [2024-07-15 21:04:40.992099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.339 [2024-07-15 21:04:40.992439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.339 [2024-07-15 21:04:40.992454] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.339 [2024-07-15 21:04:41.004234] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.339 [2024-07-15 21:04:41.004509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.339 [2024-07-15 21:04:41.004524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.339 [2024-07-15 21:04:41.016371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.339 [2024-07-15 21:04:41.016643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.339 [2024-07-15 21:04:41.016658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.339 [2024-07-15 21:04:41.028503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.028971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 21:04:41.028986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.040822] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.041256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 21:04:41.041272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.052976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.053427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 21:04:41.053442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.065080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.065499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 21:04:41.065514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.077192] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.077602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 
21:04:41.077617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.089313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.089767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 21:04:41.089782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.101436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.101870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 21:04:41.101885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.113526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.113947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 21:04:41.113962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.125640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.126117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 21:04:41.126135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.137785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.138178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 21:04:41.138193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.149948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.150393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 21:04:41.150408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.162068] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.162359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 
[2024-07-15 21:04:41.162374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.174181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.174625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 21:04:41.174640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.186273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.186727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 21:04:41.186742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.198372] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.198769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 21:04:41.198784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.210504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.210776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 21:04:41.210791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.340 [2024-07-15 21:04:41.222583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.340 [2024-07-15 21:04:41.223015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.340 [2024-07-15 21:04:41.223030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.234718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.235094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.235109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.246822] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.247240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:37.602 [2024-07-15 21:04:41.247256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.258980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.259270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.259284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.271076] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.271482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.271497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.283199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.283611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.283626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.295313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.295706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.295721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.307446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.307708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.307723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.319520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.319941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.319956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.331634] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.332123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18869 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:37.602 [2024-07-15 21:04:41.332139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.343828] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.344282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.344297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.355912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.356195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.356210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.368006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.368439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.368454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.380174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.380665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.380680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.392258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.392733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.392751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.404400] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.404682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.404697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.416460] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.416868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17359 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.416884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.428588] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.428876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.428891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.440726] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.441147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.441162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.452829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.453106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.453124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.464980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.465434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.465449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.477086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.477557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.477572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.602 [2024-07-15 21:04:41.489290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.602 [2024-07-15 21:04:41.489665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.602 [2024-07-15 21:04:41.489681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.864 [2024-07-15 21:04:41.501394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.864 [2024-07-15 21:04:41.501729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19832 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:37.864 [2024-07-15 21:04:41.501745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.864 [2024-07-15 21:04:41.513526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.864 [2024-07-15 21:04:41.513937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.864 [2024-07-15 21:04:41.513952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.864 [2024-07-15 21:04:41.525647] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.864 [2024-07-15 21:04:41.526045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.864 [2024-07-15 21:04:41.526060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.864 [2024-07-15 21:04:41.537783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.864 [2024-07-15 21:04:41.538101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.864 [2024-07-15 21:04:41.538116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.864 [2024-07-15 21:04:41.549905] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.864 [2024-07-15 21:04:41.550331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.864 [2024-07-15 21:04:41.550346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.864 [2024-07-15 21:04:41.562013] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.864 [2024-07-15 21:04:41.562478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.864 [2024-07-15 21:04:41.562493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.864 [2024-07-15 21:04:41.574176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.864 [2024-07-15 21:04:41.574465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.864 [2024-07-15 21:04:41.574481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.864 [2024-07-15 21:04:41.586247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.864 [2024-07-15 21:04:41.586637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14351 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.864 [2024-07-15 21:04:41.586652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.864 [2024-07-15 21:04:41.598446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.864 [2024-07-15 21:04:41.598730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.864 [2024-07-15 21:04:41.598746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.864 [2024-07-15 21:04:41.610608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.864 [2024-07-15 21:04:41.610906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.865 [2024-07-15 21:04:41.610921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.865 [2024-07-15 21:04:41.622758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.865 [2024-07-15 21:04:41.623033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.865 [2024-07-15 21:04:41.623048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.865 [2024-07-15 21:04:41.634818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.865 [2024-07-15 21:04:41.635216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.865 [2024-07-15 21:04:41.635232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.865 [2024-07-15 21:04:41.646983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.865 [2024-07-15 21:04:41.647432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.865 [2024-07-15 21:04:41.647447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.865 [2024-07-15 21:04:41.659144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.865 [2024-07-15 21:04:41.659526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.865 [2024-07-15 21:04:41.659541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.865 [2024-07-15 21:04:41.671296] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.865 [2024-07-15 21:04:41.671680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6098 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.865 [2024-07-15 21:04:41.671695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.865 [2024-07-15 21:04:41.683423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.865 [2024-07-15 21:04:41.683734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.865 [2024-07-15 21:04:41.683749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.865 [2024-07-15 21:04:41.695564] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.865 [2024-07-15 21:04:41.695955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.865 [2024-07-15 21:04:41.695970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.865 [2024-07-15 21:04:41.707691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.865 [2024-07-15 21:04:41.707997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.865 [2024-07-15 21:04:41.708015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.865 [2024-07-15 21:04:41.719871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.865 [2024-07-15 21:04:41.720337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.865 [2024-07-15 21:04:41.720352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.865 [2024-07-15 21:04:41.732014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.865 [2024-07-15 21:04:41.732335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.865 [2024-07-15 21:04:41.732350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.865 [2024-07-15 21:04:41.744186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:37.865 [2024-07-15 21:04:41.744679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.865 [2024-07-15 21:04:41.744695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.865 [2024-07-15 21:04:41.756270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:38.126 [2024-07-15 21:04:41.756648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:1929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.126 [2024-07-15 21:04:41.756663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:38.126 [2024-07-15 21:04:41.768422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:38.126 [2024-07-15 21:04:41.768714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.126 [2024-07-15 21:04:41.768729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:38.126 [2024-07-15 21:04:41.780574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:38.126 [2024-07-15 21:04:41.781030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.126 [2024-07-15 21:04:41.781045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:38.126 [2024-07-15 21:04:41.792757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:38.126 [2024-07-15 21:04:41.793046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.126 [2024-07-15 21:04:41.793061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:38.126 [2024-07-15 21:04:41.804860] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:38.126 [2024-07-15 21:04:41.805184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.126 [2024-07-15 21:04:41.805199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:38.126 [2024-07-15 21:04:41.816936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:38.126 [2024-07-15 21:04:41.817370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.126 [2024-07-15 21:04:41.817386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:38.126 [2024-07-15 21:04:41.829040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:38.126 [2024-07-15 21:04:41.829430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.127 [2024-07-15 21:04:41.829445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:38.127 [2024-07-15 21:04:41.841184] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:38.127 [2024-07-15 21:04:41.841584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:7302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.127 [2024-07-15 21:04:41.841599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:38.127 [2024-07-15 21:04:41.853278] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:38.127 [2024-07-15 21:04:41.853690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.127 [2024-07-15 21:04:41.853705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:38.127 [2024-07-15 21:04:41.865440] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:38.127 [2024-07-15 21:04:41.865829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.127 [2024-07-15 21:04:41.865844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:38.127 [2024-07-15 21:04:41.877590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1999aa0) with pdu=0x2000190fa7d8 00:28:38.127 [2024-07-15 21:04:41.877860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.127 [2024-07-15 21:04:41.877876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:38.127 00:28:38.127 Latency(us) 00:28:38.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.127 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:38.127 nvme0n1 : 2.01 20949.68 81.83 0.00 0.00 6098.18 5079.04 14308.69 00:28:38.127 =================================================================================================================== 00:28:38.127 Total : 20949.68 81.83 0.00 0.00 6098.18 5079.04 14308.69 00:28:38.127 0 00:28:38.127 21:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:38.127 21:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:38.127 21:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:38.127 | .driver_specific 00:28:38.127 | .nvme_error 00:28:38.127 | .status_code 00:28:38.127 | .command_transient_transport_error' 00:28:38.127 21:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 )) 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1758670 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1758670 ']' 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1758670 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:38.388 21:04:42 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1758670 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1758670' 00:28:38.388 killing process with pid 1758670 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1758670 00:28:38.388 Received shutdown signal, test time was about 2.000000 seconds 00:28:38.388 00:28:38.388 Latency(us) 00:28:38.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.388 =================================================================================================================== 00:28:38.388 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1758670 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1759415 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1759415 /var/tmp/bperf.sock 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1759415 ']' 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:38.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:38.388 21:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:38.649 [2024-07-15 21:04:42.290780] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
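The pass/fail decision for the case above is visible in the trace: host/digest.sh@71 reads the bdev I/O statistics from the bperf instance and requires the transient-transport-error counter to be non-zero (here the extracted value was 164, so the check passed, the first bperf process with pid 1758670 was killed, and a new bdevperf instance was launched for the next configuration: randwrite, 131072-byte I/O, queue depth 16). A minimal standalone sketch of that counter check, built only from the RPC call and jq path shown in the trace; the socket path, bdev name and SPDK checkout path are the ones used in this run and are otherwise assumptions:

  # Query per-bdev I/O statistics from the bdevperf instance listening on /var/tmp/bperf.sock.
  # driver_specific.nvme_error.status_code carries per-NVMe-status completion counters when
  # error statistics are enabled on the bdev (see bdev_nvme_set_options --nvme-error-stat below).
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # Pass if at least one command completed with TRANSIENT TRANSPORT ERROR, i.e. the injected
  # CRC32C data-digest corruption was actually caught on the wire.
  (( errcount > 0 ))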
00:28:38.649 [2024-07-15 21:04:42.290834] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1759415 ] 00:28:38.649 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:38.649 Zero copy mechanism will not be used. 00:28:38.649 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.649 [2024-07-15 21:04:42.365157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.649 [2024-07-15 21:04:42.418035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.221 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:39.221 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:39.221 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:39.221 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:39.482 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:39.482 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.482 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.482 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.482 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:39.482 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:39.742 nvme0n1 00:28:39.742 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:39.742 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.742 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.742 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.742 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:39.742 21:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:40.004 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:40.004 Zero copy mechanism will not be used. 00:28:40.004 Running I/O for 2 seconds... 
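For context, the digest-error run traced above reduces to the following RPC sequence. This is a minimal sketch reassembled from the commands visible in the log (the paths, the /var/tmp/bperf.sock socket, target 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1 and bdev nvme0n1 are all taken from the trace); which commands go to the bdevperf socket versus the target application's default RPC socket is inferred from the bperf_rpc/rpc_cmd wrappers and may differ slightly from the real harness.

# Start the bdevperf initiator on its own RPC socket; -z makes it wait for perform_tests.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

# Count NVMe errors per status code and retry indefinitely instead of failing the bdev.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the NVMe/TCP controller with data digest enabled (--ddgst).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Via the target application's RPC socket: corrupt every 32nd crc32c computation so the
# data digest miscompares and the WRITEs complete with COMMAND TRANSIENT TRANSPORT
# ERROR (00/22), as in the qpair prints above and below.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the 2-second randwrite workload, then read back the transient-error counter.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 | jq -r \
    '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The test then asserts that this counter is non-zero (the preceding randwrite/depth-128 pass reported 164 such completions), i.e. digest corruption surfaces as counted, retried transient transport errors rather than as I/O failures.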
00:28:40.004 [2024-07-15 21:04:43.727813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.728296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.728322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.004 [2024-07-15 21:04:43.743987] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.744293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.744312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.004 [2024-07-15 21:04:43.755433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.755756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.755773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.004 [2024-07-15 21:04:43.765764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.766083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.766100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.004 [2024-07-15 21:04:43.775531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.775934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.775952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.004 [2024-07-15 21:04:43.786843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.787080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.787097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.004 [2024-07-15 21:04:43.798095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.798543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.798561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.004 [2024-07-15 21:04:43.808990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.809340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.809357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.004 [2024-07-15 21:04:43.818911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.819270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.819287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.004 [2024-07-15 21:04:43.830068] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.830522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.830539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.004 [2024-07-15 21:04:43.841259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.841619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.841636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.004 [2024-07-15 21:04:43.850558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.850715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.850731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.004 [2024-07-15 21:04:43.862051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.862408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.862428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.004 [2024-07-15 21:04:43.872031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.872398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.872415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.004 [2024-07-15 21:04:43.881730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.882103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.882120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.004 [2024-07-15 21:04:43.891835] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.004 [2024-07-15 21:04:43.892171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.004 [2024-07-15 21:04:43.892188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:43.901591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:43.901927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:43.901944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:43.911760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:43.911890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:43.911905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:43.922489] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:43.922819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:43.922836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:43.933219] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:43.933581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:43.933598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:43.943763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:43.944090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:43.944106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:43.955200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:43.955543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:43.955560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:43.966119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:43.966587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:43.966605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:43.977023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:43.977366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:43.977383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:43.988415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:43.988763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:43.988779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:43.999694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:44.000036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:44.000053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:44.010282] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:44.010614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:44.010631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:44.020395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:44.020746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 
[2024-07-15 21:04:44.020762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:44.030632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:44.030975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:44.030991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:44.040762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:44.041132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:44.041152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:44.051615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:44.051959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:44.051975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:44.061814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:44.062179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:44.062196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:44.072747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:44.073076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:44.073093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:44.083500] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:44.083833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:44.083849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:44.093089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:44.093413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:44.093428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:44.103913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:44.104253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.266 [2024-07-15 21:04:44.104270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.266 [2024-07-15 21:04:44.113115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.266 [2024-07-15 21:04:44.113397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.267 [2024-07-15 21:04:44.113414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.267 [2024-07-15 21:04:44.122172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.267 [2024-07-15 21:04:44.122498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.267 [2024-07-15 21:04:44.122514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.267 [2024-07-15 21:04:44.132452] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.267 [2024-07-15 21:04:44.132724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.267 [2024-07-15 21:04:44.132741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.267 [2024-07-15 21:04:44.141958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.267 [2024-07-15 21:04:44.142201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.267 [2024-07-15 21:04:44.142218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.267 [2024-07-15 21:04:44.151733] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.267 [2024-07-15 21:04:44.152215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.267 [2024-07-15 21:04:44.152232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.528 [2024-07-15 21:04:44.162593] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.528 [2024-07-15 21:04:44.162896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.528 [2024-07-15 21:04:44.162913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.528 [2024-07-15 21:04:44.172035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.528 [2024-07-15 21:04:44.172283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.528 [2024-07-15 21:04:44.172300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.528 [2024-07-15 21:04:44.181866] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.528 [2024-07-15 21:04:44.182263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.528 [2024-07-15 21:04:44.182280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.528 [2024-07-15 21:04:44.191633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.528 [2024-07-15 21:04:44.192035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.528 [2024-07-15 21:04:44.192053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.528 [2024-07-15 21:04:44.201583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.528 [2024-07-15 21:04:44.201799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.528 [2024-07-15 21:04:44.201816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.528 [2024-07-15 21:04:44.211289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.211611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.211627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.220841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.221073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.221090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.230379] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.230685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.230702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.239653] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.239955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.239971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.249533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.249899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.249916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.259871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.260263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.260281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.270541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.270869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.270885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.280333] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.280550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.280567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.290211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.290600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.290617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.300584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 
[2024-07-15 21:04:44.300832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.300852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.310704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.310994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.311011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.320566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.320867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.320884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.330143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.330374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.330391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.339575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.339871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.339887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.348423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.348821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.348838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.358055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.358333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.358349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.367547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.367761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.367778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.377020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.377325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.377341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.386823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.387097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.387113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.396236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.396694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.396711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.407206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.407506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.407523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.529 [2024-07-15 21:04:44.417226] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.529 [2024-07-15 21:04:44.417453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.529 [2024-07-15 21:04:44.417470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.790 [2024-07-15 21:04:44.428057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.790 [2024-07-15 21:04:44.428351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.428368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.438014] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.438255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.438272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.448029] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.448264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.448281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.458585] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.458834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.458850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.469711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.469975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.469991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.481171] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.481453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.481470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.492570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.492905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.492922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.503838] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.504181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.504197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:40.791 [2024-07-15 21:04:44.514229] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.514523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.514540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.524797] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.525117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.525139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.535415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.535731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.535748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.545318] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.545780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.545796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.556198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.556431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.556448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.567526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.567884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.567904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.578774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.579172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.579188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.589507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.589842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.589859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.600073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.600338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.600355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.609565] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.609846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.609866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.620027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.620314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.620330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.631004] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.631250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.631267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.641511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.641945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.641962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.652133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.652535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.652552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.663429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.663888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.663905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.791 [2024-07-15 21:04:44.675364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:40.791 [2024-07-15 21:04:44.675786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.791 [2024-07-15 21:04:44.675803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.686214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.686650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.686666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.698631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.698941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.698958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.707874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.708189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.708205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.718712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.719148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.719165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.729693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.730141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.730157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.740130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.740514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.740532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.750484] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.750778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.750794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.761264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.761567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.761583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.772275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.772498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.772515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.783642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.783934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.783951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.795242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.795643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.795659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.807497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.807812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 
[2024-07-15 21:04:44.807829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.817795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.818111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.818133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.827434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.827722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.827739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.836851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.837213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.837230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.846930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.847293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.847313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.857525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.857750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.857767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.867905] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.053 [2024-07-15 21:04:44.868187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.053 [2024-07-15 21:04:44.868204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.053 [2024-07-15 21:04:44.877894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.054 [2024-07-15 21:04:44.878367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.054 [2024-07-15 21:04:44.878383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.054 [2024-07-15 21:04:44.888429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.054 [2024-07-15 21:04:44.888605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.054 [2024-07-15 21:04:44.888621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.054 [2024-07-15 21:04:44.898659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.054 [2024-07-15 21:04:44.898910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.054 [2024-07-15 21:04:44.898927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.054 [2024-07-15 21:04:44.908718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.054 [2024-07-15 21:04:44.909068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.054 [2024-07-15 21:04:44.909084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.054 [2024-07-15 21:04:44.917688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.054 [2024-07-15 21:04:44.918057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.054 [2024-07-15 21:04:44.918076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.054 [2024-07-15 21:04:44.927577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.054 [2024-07-15 21:04:44.927903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.054 [2024-07-15 21:04:44.927919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.054 [2024-07-15 21:04:44.937428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.054 [2024-07-15 21:04:44.937916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.054 [2024-07-15 21:04:44.937934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:44.946882] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 [2024-07-15 21:04:44.947156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:44.947172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:44.956286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 [2024-07-15 21:04:44.956647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:44.956664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:44.966286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 [2024-07-15 21:04:44.966617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:44.966634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:44.975917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 [2024-07-15 21:04:44.976175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:44.976191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:44.984557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 [2024-07-15 21:04:44.984786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:44.984801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:44.995293] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 [2024-07-15 21:04:44.995587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:44.995603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:45.005585] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 [2024-07-15 21:04:45.005800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:45.005818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:45.015279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 [2024-07-15 21:04:45.015515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:45.015535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:45.025027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 [2024-07-15 21:04:45.025306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:45.025323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:45.034492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 [2024-07-15 21:04:45.034841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:45.034857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:45.043778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 [2024-07-15 21:04:45.044105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:45.044127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:45.052556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 [2024-07-15 21:04:45.052796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:45.052813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:45.062027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 [2024-07-15 21:04:45.062457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:45.062474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:45.072375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 [2024-07-15 21:04:45.072704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:45.072720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:45.082893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 
[2024-07-15 21:04:45.083302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:45.083319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.315 [2024-07-15 21:04:45.093210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.315 [2024-07-15 21:04:45.093531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.315 [2024-07-15 21:04:45.093548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.316 [2024-07-15 21:04:45.102525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.316 [2024-07-15 21:04:45.102815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.316 [2024-07-15 21:04:45.102833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.316 [2024-07-15 21:04:45.111740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.316 [2024-07-15 21:04:45.111982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.316 [2024-07-15 21:04:45.111998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.316 [2024-07-15 21:04:45.120648] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.316 [2024-07-15 21:04:45.120956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.316 [2024-07-15 21:04:45.120973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.316 [2024-07-15 21:04:45.130384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.316 [2024-07-15 21:04:45.130707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.316 [2024-07-15 21:04:45.130723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.316 [2024-07-15 21:04:45.139944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.316 [2024-07-15 21:04:45.140269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.316 [2024-07-15 21:04:45.140285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.316 [2024-07-15 21:04:45.149059] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.316 [2024-07-15 21:04:45.149339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.316 [2024-07-15 21:04:45.149356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.316 [2024-07-15 21:04:45.158196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.316 [2024-07-15 21:04:45.158566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.316 [2024-07-15 21:04:45.158583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.316 [2024-07-15 21:04:45.168064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.316 [2024-07-15 21:04:45.168317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.316 [2024-07-15 21:04:45.168334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.316 [2024-07-15 21:04:45.176748] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.316 [2024-07-15 21:04:45.177006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.316 [2024-07-15 21:04:45.177022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.316 [2024-07-15 21:04:45.185938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.316 [2024-07-15 21:04:45.186219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.316 [2024-07-15 21:04:45.186236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.316 [2024-07-15 21:04:45.194015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.316 [2024-07-15 21:04:45.194315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.316 [2024-07-15 21:04:45.194331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.316 [2024-07-15 21:04:45.203019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.316 [2024-07-15 21:04:45.203356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.316 [2024-07-15 21:04:45.203372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.576 [2024-07-15 21:04:45.212701] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.576 [2024-07-15 21:04:45.212972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.576 [2024-07-15 21:04:45.212988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.576 [2024-07-15 21:04:45.221810] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.222093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.222109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.231118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.231609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.231627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.241289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.241627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.241644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.251395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.251814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.251831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.261435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.261743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.261763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.271506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.271815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.271831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:41.577 [2024-07-15 21:04:45.281113] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.281469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.281485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.290793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.291035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.291051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.299476] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.299768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.299784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.308926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.309179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.309195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.317821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.318056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.318074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.327529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.327854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.327871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.338143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.338348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.338368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.347945] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.348304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.348321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.357870] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.358337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.358353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.368415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.368739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.368755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.380090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.380351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.380367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.392147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.392477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.392494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.404133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.404350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.404366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.415607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.416108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.416128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.426681] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.427071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.427089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.438558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.438815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.438832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.448335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.448523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.448540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.577 [2024-07-15 21:04:45.458884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.577 [2024-07-15 21:04:45.459400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.577 [2024-07-15 21:04:45.459419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.468769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.469044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.469062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.478964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.479223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.479239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.490081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.490392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.490408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.498794] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.499050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.499067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.507612] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.507929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.507945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.516825] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.517098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.517115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.526435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.526742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.526762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.535294] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.535641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.535657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.544824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.545033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.545050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.553151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.553501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 
[2024-07-15 21:04:45.553519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.561758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.562008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.562023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.571009] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.571294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.571310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.580675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.580879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.580895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.589855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.590062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.590078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.600009] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.600276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.600293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.609752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.610071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.610087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.619699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.619896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.619912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.630446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.630718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.630738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.640311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.838 [2024-07-15 21:04:45.640609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.838 [2024-07-15 21:04:45.640625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.838 [2024-07-15 21:04:45.649861] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.839 [2024-07-15 21:04:45.650247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.839 [2024-07-15 21:04:45.650265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.839 [2024-07-15 21:04:45.659898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.839 [2024-07-15 21:04:45.660184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.839 [2024-07-15 21:04:45.660201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.839 [2024-07-15 21:04:45.670002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.839 [2024-07-15 21:04:45.670490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.839 [2024-07-15 21:04:45.670507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.839 [2024-07-15 21:04:45.679799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.839 [2024-07-15 21:04:45.680104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.839 [2024-07-15 21:04:45.680120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.839 [2024-07-15 21:04:45.689769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90 00:28:41.839 [2024-07-15 21:04:45.690032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.839 [2024-07-15 21:04:45.690051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:41.839 [2024-07-15 21:04:45.699996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8eca0) with pdu=0x2000190fef90
00:28:41.839 [2024-07-15 21:04:45.700414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.839 [2024-07-15 21:04:45.700430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:41.839
00:28:41.839 Latency(us)
00:28:41.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:41.839 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:41.839 nvme0n1 : 2.01 3029.37 378.67 0.00 0.00 5272.73 3304.11 19223.89
00:28:41.839 ===================================================================================================================
00:28:41.839 Total : 3029.37 378.67 0.00 0.00 5272.73 3304.11 19223.89
00:28:41.839 0
00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:42.099 | .driver_specific
00:28:42.099 | .nvme_error
00:28:42.099 | .status_code
00:28:42.099 | .command_transient_transport_error'
00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 195 > 0 ))
00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1759415
00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1759415 ']'
00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1759415
00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1759415
00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1759415'
00:28:42.099 killing process with pid 1759415
00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1759415
00:28:42.099 Received shutdown signal, test time was about 2.000000 seconds
00:28:42.099
00:28:42.099 Latency(us)
00:28:42.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:42.099 
=================================================================================================================== 00:28:42.099 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:42.099 21:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1759415 00:28:42.358 21:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1756992 00:28:42.358 21:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1756992 ']' 00:28:42.358 21:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1756992 00:28:42.358 21:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:42.358 21:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:42.358 21:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1756992 00:28:42.358 21:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:42.358 21:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:42.358 21:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1756992' 00:28:42.358 killing process with pid 1756992 00:28:42.358 21:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1756992 00:28:42.358 21:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1756992 00:28:42.618 00:28:42.618 real 0m16.359s 00:28:42.618 user 0m32.197s 00:28:42.618 sys 0m3.214s 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:42.618 ************************************ 00:28:42.618 END TEST nvmf_digest_error 00:28:42.618 ************************************ 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:42.618 rmmod nvme_tcp 00:28:42.618 rmmod nvme_fabrics 00:28:42.618 rmmod nvme_keyring 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1756992 ']' 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1756992 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1756992 ']' 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- 
common/autotest_common.sh@952 -- # kill -0 1756992 00:28:42.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1756992) - No such process 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1756992 is not found' 00:28:42.618 Process with pid 1756992 is not found 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:42.618 21:04:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.583 21:04:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:44.583 00:28:44.583 real 0m42.274s 00:28:44.583 user 1m6.490s 00:28:44.583 sys 0m11.753s 00:28:44.583 21:04:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:44.583 21:04:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:44.583 ************************************ 00:28:44.583 END TEST nvmf_digest 00:28:44.583 ************************************ 00:28:44.844 21:04:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:44.845 21:04:48 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:28:44.845 21:04:48 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:28:44.845 21:04:48 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:28:44.845 21:04:48 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:44.845 21:04:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:44.845 21:04:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:44.845 21:04:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:44.845 ************************************ 00:28:44.845 START TEST nvmf_bdevperf 00:28:44.845 ************************************ 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:44.845 * Looking for test storage... 
00:28:44.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:44.845 21:04:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:52.990 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:52.991 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:52.991 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:52.991 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:52.991 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:52.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:52.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:28:52.991 00:28:52.991 --- 10.0.0.2 ping statistics --- 00:28:52.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.991 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:52.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:28:52.991 00:28:52.991 --- 10.0.0.1 ping statistics --- 00:28:52.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.991 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1764137 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1764137 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1764137 ']' 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:52.991 21:04:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.991 [2024-07-15 21:04:55.744880] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:28:52.991 [2024-07-15 21:04:55.744940] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.991 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.991 [2024-07-15 21:04:55.807061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:52.991 [2024-07-15 21:04:55.883719] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:52.991 [2024-07-15 21:04:55.883772] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.991 [2024-07-15 21:04:55.883778] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.991 [2024-07-15 21:04:55.883784] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.991 [2024-07-15 21:04:55.883788] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.991 [2024-07-15 21:04:55.884108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.991 [2024-07-15 21:04:55.884249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:52.991 [2024-07-15 21:04:55.884443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.991 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:52.991 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:28:52.991 21:04:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:52.991 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:52.991 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.991 21:04:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.991 21:04:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:52.991 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.991 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.991 [2024-07-15 21:04:56.613760] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.991 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.991 21:04:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:52.991 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.991 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.991 Malloc0 00:28:52.991 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.992 [2024-07-15 21:04:56.690046] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.992 { 00:28:52.992 "params": { 00:28:52.992 "name": "Nvme$subsystem", 00:28:52.992 "trtype": "$TEST_TRANSPORT", 00:28:52.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.992 "adrfam": "ipv4", 00:28:52.992 "trsvcid": "$NVMF_PORT", 00:28:52.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.992 "hdgst": ${hdgst:-false}, 00:28:52.992 "ddgst": ${ddgst:-false} 00:28:52.992 }, 00:28:52.992 "method": "bdev_nvme_attach_controller" 00:28:52.992 } 00:28:52.992 EOF 00:28:52.992 )") 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:52.992 21:04:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:52.992 "params": { 00:28:52.992 "name": "Nvme1", 00:28:52.992 "trtype": "tcp", 00:28:52.992 "traddr": "10.0.0.2", 00:28:52.992 "adrfam": "ipv4", 00:28:52.992 "trsvcid": "4420", 00:28:52.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:52.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:52.992 "hdgst": false, 00:28:52.992 "ddgst": false 00:28:52.992 }, 00:28:52.992 "method": "bdev_nvme_attach_controller" 00:28:52.992 }' 00:28:52.992 [2024-07-15 21:04:56.743263] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:28:52.992 [2024-07-15 21:04:56.743313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764480 ] 00:28:52.992 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.992 [2024-07-15 21:04:56.801768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.992 [2024-07-15 21:04:56.866357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.253 Running I/O for 1 seconds... 
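[editor's note] For reference, the --json /dev/fd/62 argument in the bdevperf invocation above points at a config generated on the fly by gen_nvmf_target_json; the params block printed in the trace is the bdev_nvme_attach_controller call that attaches Nvme1 over TCP at 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode1, host nqn.2016-06.io.spdk:host1) before the 1-second verify workload starts. A rough hand-written equivalent is sketched below; the surrounding subsystems/bdev "config" envelope and the /tmp file name are assumptions, since the trace only shows the inner params.

  # Sketch only: approximate standalone reproduction of the bdevperf run above.
  # The JSON envelope is assumed; only the inner params appear in the trace.
  cat > /tmp/bdevperf.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  EOF
  # 128 outstanding 4 KiB verify I/Os against the attached controller for 1 second,
  # matching the flags shown in the trace (-q 128 -o 4096 -w verify -t 1):
  ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1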
00:28:54.272 00:28:54.272 Latency(us) 00:28:54.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.272 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:54.272 Verification LBA range: start 0x0 length 0x4000 00:28:54.272 Nvme1n1 : 1.01 9227.64 36.05 0.00 0.00 13801.00 1331.20 17257.81 00:28:54.272 =================================================================================================================== 00:28:54.272 Total : 9227.64 36.05 0.00 0.00 13801.00 1331.20 17257.81 00:28:54.533 21:04:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1764820 00:28:54.533 21:04:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:54.533 21:04:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:54.533 21:04:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:54.533 21:04:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:54.533 21:04:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:54.533 21:04:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.533 21:04:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.533 { 00:28:54.533 "params": { 00:28:54.533 "name": "Nvme$subsystem", 00:28:54.533 "trtype": "$TEST_TRANSPORT", 00:28:54.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.533 "adrfam": "ipv4", 00:28:54.533 "trsvcid": "$NVMF_PORT", 00:28:54.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.533 "hdgst": ${hdgst:-false}, 00:28:54.533 "ddgst": ${ddgst:-false} 00:28:54.533 }, 00:28:54.533 "method": "bdev_nvme_attach_controller" 00:28:54.533 } 00:28:54.533 EOF 00:28:54.533 )") 00:28:54.533 21:04:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:54.533 21:04:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:54.533 21:04:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:54.533 21:04:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:54.533 "params": { 00:28:54.533 "name": "Nvme1", 00:28:54.533 "trtype": "tcp", 00:28:54.533 "traddr": "10.0.0.2", 00:28:54.533 "adrfam": "ipv4", 00:28:54.533 "trsvcid": "4420", 00:28:54.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:54.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:54.533 "hdgst": false, 00:28:54.533 "ddgst": false 00:28:54.533 }, 00:28:54.533 "method": "bdev_nvme_attach_controller" 00:28:54.533 }' 00:28:54.533 [2024-07-15 21:04:58.243582] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:28:54.533 [2024-07-15 21:04:58.243638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764820 ] 00:28:54.533 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.533 [2024-07-15 21:04:58.302334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.533 [2024-07-15 21:04:58.364721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.794 Running I/O for 15 seconds... 
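[editor's note] The 15-second run just started is the failure-injection case: host/bdevperf.sh next kills the nvmf target (nvmfpid 1764137, the kill -9 at bdevperf.sh@33 below) while the 128 queued verify I/Os are still outstanding, so each in-flight READ/WRITE is completed with ABORTED - SQ DELETION status once the connection to the killed target is torn down. The flood of nvme_qpair notices that follows is that abort path, not a test failure; the script then sleeps (bdevperf.sh@35) and, presumably, brings the target back so the remainder of the 15-second run exercises recovery. A minimal sketch of the injected fault and a quick way to tally the aborted completions (the capture file name is hypothetical):

  # Sketch only: the fault the test injects, in isolation.
  kill -9 "$nvmfpid"        # drop nvmf_tgt (1764137 in this run) mid-I/O
  sleep 3                   # give bdevperf time to observe the failed qpairs
  # Count how many outstanding commands were aborted by the queue deletion;
  # try.txt is a hypothetical capture of the bdevperf output shown below.
  grep -c 'ABORTED - SQ DELETION' try.txt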
00:28:57.342 21:05:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1764137 00:28:57.342 21:05:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:57.342 [2024-07-15 21:05:01.209172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.342 [2024-07-15 21:05:01.209212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.342 [2024-07-15 21:05:01.209234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.342 [2024-07-15 21:05:01.209245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.342 [2024-07-15 21:05:01.209257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.342 [2024-07-15 21:05:01.209265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.342 [2024-07-15 21:05:01.209280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.342 [2024-07-15 21:05:01.209288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209405] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209585] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 
[2024-07-15 21:05:01.209917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.343 [2024-07-15 21:05:01.209974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.209984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.343 [2024-07-15 21:05:01.209990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.343 [2024-07-15 21:05:01.210000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.344 [2024-07-15 21:05:01.210006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.344 [2024-07-15 21:05:01.210023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.344 [2024-07-15 21:05:01.210038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.344 [2024-07-15 21:05:01.210054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.344 [2024-07-15 21:05:01.210070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.344 [2024-07-15 21:05:01.210086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.344 [2024-07-15 21:05:01.210102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.344 [2024-07-15 21:05:01.210118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.344 [2024-07-15 21:05:01.210233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.344 [2024-07-15 21:05:01.210250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.344 [2024-07-15 21:05:01.210266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.344 [2024-07-15 21:05:01.210282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.344 [2024-07-15 21:05:01.210298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.344 [2024-07-15 21:05:01.210314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.344 [2024-07-15 21:05:01.210330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:46 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95608 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 
[2024-07-15 21:05:01.210673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.344 [2024-07-15 21:05:01.210785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.344 [2024-07-15 21:05:01.210794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.210801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.210810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.210817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.210826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.210833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.210844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.210851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.210860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.210868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.210877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.210884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.210893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.210900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.210909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.210917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.210926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.210933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.210942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.210949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.210959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.210966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.210975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.210982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.210991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.210998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.345 [2024-07-15 21:05:01.211183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:57.345 [2024-07-15 21:05:01.211340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.345 [2024-07-15 21:05:01.211428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2138a00 is same with the state(5) to be set 00:28:57.345 [2024-07-15 21:05:01.211444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.345 [2024-07-15 21:05:01.211450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.345 [2024-07-15 21:05:01.211456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95168 len:8 PRP1 0x0 PRP2 0x0 00:28:57.345 [2024-07-15 21:05:01.211464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.345 [2024-07-15 21:05:01.211504] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2138a00 was disconnected and freed. reset controller. 
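Note on the block above: the long run of "ABORTED - SQ DELETION" completions followed by "aborting queued i/o" and "Command completed manually" is the driver draining its software queue when the TCP qpair is torn down ahead of the controller reset; requests that never reached the controller are completed locally with an abort status so upper layers can retry once the reset finishes. A minimal, hypothetical C sketch of that pattern (illustration only, not SPDK source; the struct and status names are invented for the example):

    #include <stdio.h>
    #include <stdlib.h>

    /* One software-queued request awaiting submission (hypothetical layout). */
    struct queued_req {
        unsigned int       cid;                     /* command identifier */
        void             (*complete)(struct queued_req *req, int status);
        struct queued_req *next;
    };

    enum { STATUS_SUCCESS = 0, STATUS_ABORTED_SQ_DELETION = 1 };

    /* Walk the queue and complete every pending request manually with an
     * abort status, mirroring the "Command completed manually" notices. */
    static void abort_queued_reqs(struct queued_req **head)
    {
        struct queued_req *req = *head;
        *head = NULL;                               /* detach list before callbacks run */

        while (req != NULL) {
            struct queued_req *next = req->next;
            req->complete(req, STATUS_ABORTED_SQ_DELETION);
            req = next;
        }
    }

    static void on_complete(struct queued_req *req, int status)
    {
        printf("cid %u completed manually, status %d\n", req->cid, status);
        free(req);
    }

    int main(void)
    {
        struct queued_req *head = NULL;

        /* Queue a few fake requests, then tear the queue down. */
        for (unsigned int cid = 0; cid < 3; cid++) {
            struct queued_req *req = malloc(sizeof(*req));
            req->cid = cid;
            req->complete = on_complete;
            req->next = head;
            head = req;
        }

        abort_queued_reqs(&head);
        return 0;
    }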
00:28:57.345 [2024-07-15 21:05:01.215051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.345 [2024-07-15 21:05:01.215099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.345 [2024-07-15 21:05:01.215887] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.346 [2024-07-15 21:05:01.215903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.346 [2024-07-15 21:05:01.215911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.346 [2024-07-15 21:05:01.216136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.346 [2024-07-15 21:05:01.216356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.346 [2024-07-15 21:05:01.216364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.346 [2024-07-15 21:05:01.216372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.346 [2024-07-15 21:05:01.219916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.346 [2024-07-15 21:05:01.229136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.346 [2024-07-15 21:05:01.229797] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.346 [2024-07-15 21:05:01.229813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.346 [2024-07-15 21:05:01.229821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.346 [2024-07-15 21:05:01.230040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.346 [2024-07-15 21:05:01.230264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.346 [2024-07-15 21:05:01.230272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.346 [2024-07-15 21:05:01.230279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.607 [2024-07-15 21:05:01.233832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
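Note on the reconnect cycles that follow: errno = 111 is ECONNREFUSED on Linux, which is what connect() returns when the target host is up but nothing is listening on the port, so each reset attempt fails at the socket layer before any NVMe/TCP traffic is exchanged. A standalone POSIX sketch of that failure (illustration only, not SPDK's posix_sock_create; the address 10.0.0.2:4420 is taken from the log, and an unreachable host would instead report ETIMEDOUT or EHOSTUNREACH):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP port used in this test */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the port this prints errno 111 (Connection refused). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }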
00:28:57.607 [2024-07-15 21:05:01.243060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.607 [2024-07-15 21:05:01.243692] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.607 [2024-07-15 21:05:01.243708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.607 [2024-07-15 21:05:01.243716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.607 [2024-07-15 21:05:01.243935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.607 [2024-07-15 21:05:01.244159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.607 [2024-07-15 21:05:01.244168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.607 [2024-07-15 21:05:01.244175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.607 [2024-07-15 21:05:01.247733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.607 [2024-07-15 21:05:01.256948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.607 [2024-07-15 21:05:01.257658] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.607 [2024-07-15 21:05:01.257673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.607 [2024-07-15 21:05:01.257681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.608 [2024-07-15 21:05:01.257900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.608 [2024-07-15 21:05:01.258119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.608 [2024-07-15 21:05:01.258132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.608 [2024-07-15 21:05:01.258139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.608 [2024-07-15 21:05:01.261683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.608 [2024-07-15 21:05:01.270892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.608 [2024-07-15 21:05:01.271548] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-07-15 21:05:01.271586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.608 [2024-07-15 21:05:01.271597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.608 [2024-07-15 21:05:01.271838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.608 [2024-07-15 21:05:01.272062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.608 [2024-07-15 21:05:01.272070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.608 [2024-07-15 21:05:01.272078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.608 [2024-07-15 21:05:01.275635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.608 [2024-07-15 21:05:01.284847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.608 [2024-07-15 21:05:01.285514] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-07-15 21:05:01.285553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.608 [2024-07-15 21:05:01.285564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.608 [2024-07-15 21:05:01.285803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.608 [2024-07-15 21:05:01.286026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.608 [2024-07-15 21:05:01.286035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.608 [2024-07-15 21:05:01.286043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.608 [2024-07-15 21:05:01.289607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.608 [2024-07-15 21:05:01.298826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.608 [2024-07-15 21:05:01.299328] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-07-15 21:05:01.299365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.608 [2024-07-15 21:05:01.299376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.608 [2024-07-15 21:05:01.299620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.608 [2024-07-15 21:05:01.299844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.608 [2024-07-15 21:05:01.299852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.608 [2024-07-15 21:05:01.299860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.608 [2024-07-15 21:05:01.303425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.608 [2024-07-15 21:05:01.312639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.608 [2024-07-15 21:05:01.313389] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-07-15 21:05:01.313426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.608 [2024-07-15 21:05:01.313437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.608 [2024-07-15 21:05:01.313676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.608 [2024-07-15 21:05:01.313899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.608 [2024-07-15 21:05:01.313907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.608 [2024-07-15 21:05:01.313915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.608 [2024-07-15 21:05:01.317473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.608 [2024-07-15 21:05:01.326469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.608 [2024-07-15 21:05:01.327227] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-07-15 21:05:01.327264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.608 [2024-07-15 21:05:01.327275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.608 [2024-07-15 21:05:01.327514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.608 [2024-07-15 21:05:01.327737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.608 [2024-07-15 21:05:01.327746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.608 [2024-07-15 21:05:01.327753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.608 [2024-07-15 21:05:01.331313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.608 [2024-07-15 21:05:01.340323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.608 [2024-07-15 21:05:01.341092] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-07-15 21:05:01.341136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.608 [2024-07-15 21:05:01.341149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.608 [2024-07-15 21:05:01.341392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.608 [2024-07-15 21:05:01.341616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.608 [2024-07-15 21:05:01.341624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.608 [2024-07-15 21:05:01.341636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.608 [2024-07-15 21:05:01.345198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.608 [2024-07-15 21:05:01.354215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.608 [2024-07-15 21:05:01.354983] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-07-15 21:05:01.355020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.608 [2024-07-15 21:05:01.355031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.608 [2024-07-15 21:05:01.355277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.608 [2024-07-15 21:05:01.355501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.608 [2024-07-15 21:05:01.355509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.608 [2024-07-15 21:05:01.355517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.608 [2024-07-15 21:05:01.359068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.608 [2024-07-15 21:05:01.368072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.608 [2024-07-15 21:05:01.368844] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-07-15 21:05:01.368882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.608 [2024-07-15 21:05:01.368892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.608 [2024-07-15 21:05:01.369140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.608 [2024-07-15 21:05:01.369364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.608 [2024-07-15 21:05:01.369372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.608 [2024-07-15 21:05:01.369380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.608 [2024-07-15 21:05:01.372932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.608 [2024-07-15 21:05:01.381941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.608 [2024-07-15 21:05:01.382692] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.608 [2024-07-15 21:05:01.382729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.608 [2024-07-15 21:05:01.382740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.608 [2024-07-15 21:05:01.382978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.609 [2024-07-15 21:05:01.383210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.609 [2024-07-15 21:05:01.383219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.609 [2024-07-15 21:05:01.383226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.609 [2024-07-15 21:05:01.386776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.609 [2024-07-15 21:05:01.395773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.609 [2024-07-15 21:05:01.396503] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.609 [2024-07-15 21:05:01.396545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.609 [2024-07-15 21:05:01.396557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.609 [2024-07-15 21:05:01.396795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.609 [2024-07-15 21:05:01.397018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.609 [2024-07-15 21:05:01.397027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.609 [2024-07-15 21:05:01.397035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.609 [2024-07-15 21:05:01.400597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.609 [2024-07-15 21:05:01.409592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.609 [2024-07-15 21:05:01.410195] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.609 [2024-07-15 21:05:01.410233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.609 [2024-07-15 21:05:01.410245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.609 [2024-07-15 21:05:01.410487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.609 [2024-07-15 21:05:01.410711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.609 [2024-07-15 21:05:01.410719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.609 [2024-07-15 21:05:01.410726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.609 [2024-07-15 21:05:01.414285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.609 [2024-07-15 21:05:01.423499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.609 [2024-07-15 21:05:01.424227] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.609 [2024-07-15 21:05:01.424264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.609 [2024-07-15 21:05:01.424277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.609 [2024-07-15 21:05:01.424517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.609 [2024-07-15 21:05:01.424740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.609 [2024-07-15 21:05:01.424749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.609 [2024-07-15 21:05:01.424756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.609 [2024-07-15 21:05:01.428315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.609 [2024-07-15 21:05:01.437314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.609 [2024-07-15 21:05:01.438038] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.609 [2024-07-15 21:05:01.438074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.609 [2024-07-15 21:05:01.438084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.609 [2024-07-15 21:05:01.438330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.609 [2024-07-15 21:05:01.438559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.609 [2024-07-15 21:05:01.438567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.609 [2024-07-15 21:05:01.438575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.609 [2024-07-15 21:05:01.442127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.609 [2024-07-15 21:05:01.451238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.609 [2024-07-15 21:05:01.451989] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.609 [2024-07-15 21:05:01.452026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.609 [2024-07-15 21:05:01.452037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.609 [2024-07-15 21:05:01.452285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.609 [2024-07-15 21:05:01.452509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.609 [2024-07-15 21:05:01.452517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.609 [2024-07-15 21:05:01.452525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.609 [2024-07-15 21:05:01.456077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.609 [2024-07-15 21:05:01.465081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.609 [2024-07-15 21:05:01.465721] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.609 [2024-07-15 21:05:01.465739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.609 [2024-07-15 21:05:01.465747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.609 [2024-07-15 21:05:01.465966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.609 [2024-07-15 21:05:01.466191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.609 [2024-07-15 21:05:01.466200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.609 [2024-07-15 21:05:01.466207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.609 [2024-07-15 21:05:01.469753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.609 [2024-07-15 21:05:01.478957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.609 [2024-07-15 21:05:01.479701] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.609 [2024-07-15 21:05:01.479739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.609 [2024-07-15 21:05:01.479751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.609 [2024-07-15 21:05:01.479991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.609 [2024-07-15 21:05:01.480222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.609 [2024-07-15 21:05:01.480232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.609 [2024-07-15 21:05:01.480239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.609 [2024-07-15 21:05:01.483798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.609 [2024-07-15 21:05:01.492804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.609 [2024-07-15 21:05:01.493535] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.609 [2024-07-15 21:05:01.493572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.609 [2024-07-15 21:05:01.493583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.609 [2024-07-15 21:05:01.493821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.609 [2024-07-15 21:05:01.494045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.609 [2024-07-15 21:05:01.494053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.609 [2024-07-15 21:05:01.494061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.609 [2024-07-15 21:05:01.497621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.872 [2024-07-15 21:05:01.506631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.872 [2024-07-15 21:05:01.507274] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.872 [2024-07-15 21:05:01.507311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.872 [2024-07-15 21:05:01.507322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.872 [2024-07-15 21:05:01.507561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.872 [2024-07-15 21:05:01.507784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.872 [2024-07-15 21:05:01.507793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.872 [2024-07-15 21:05:01.507800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.872 [2024-07-15 21:05:01.511354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.872 [2024-07-15 21:05:01.520556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.872 [2024-07-15 21:05:01.521212] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.872 [2024-07-15 21:05:01.521250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.872 [2024-07-15 21:05:01.521260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.872 [2024-07-15 21:05:01.521500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.872 [2024-07-15 21:05:01.521723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.872 [2024-07-15 21:05:01.521732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.872 [2024-07-15 21:05:01.521739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.872 [2024-07-15 21:05:01.525304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.872 [2024-07-15 21:05:01.534521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.872 [2024-07-15 21:05:01.535209] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.872 [2024-07-15 21:05:01.535246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.872 [2024-07-15 21:05:01.535263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.872 [2024-07-15 21:05:01.535504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.872 [2024-07-15 21:05:01.535727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.872 [2024-07-15 21:05:01.535735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.872 [2024-07-15 21:05:01.535743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.872 [2024-07-15 21:05:01.539301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.872 [2024-07-15 21:05:01.548499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.872 [2024-07-15 21:05:01.549176] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.872 [2024-07-15 21:05:01.549201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.872 [2024-07-15 21:05:01.549210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.872 [2024-07-15 21:05:01.549435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.872 [2024-07-15 21:05:01.549655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.872 [2024-07-15 21:05:01.549664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.872 [2024-07-15 21:05:01.549671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.872 [2024-07-15 21:05:01.553242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.872 [2024-07-15 21:05:01.562441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.872 [2024-07-15 21:05:01.563148] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.872 [2024-07-15 21:05:01.563185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.872 [2024-07-15 21:05:01.563196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.872 [2024-07-15 21:05:01.563435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.872 [2024-07-15 21:05:01.563658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.872 [2024-07-15 21:05:01.563666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.872 [2024-07-15 21:05:01.563674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.872 [2024-07-15 21:05:01.567233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.872 [2024-07-15 21:05:01.576236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.872 [2024-07-15 21:05:01.576998] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.872 [2024-07-15 21:05:01.577035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.872 [2024-07-15 21:05:01.577045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.872 [2024-07-15 21:05:01.577293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.872 [2024-07-15 21:05:01.577516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.872 [2024-07-15 21:05:01.577529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.872 [2024-07-15 21:05:01.577536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.872 [2024-07-15 21:05:01.581083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.872 [2024-07-15 21:05:01.590089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.872 [2024-07-15 21:05:01.590725] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.872 [2024-07-15 21:05:01.590744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.872 [2024-07-15 21:05:01.590752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.872 [2024-07-15 21:05:01.590971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.872 [2024-07-15 21:05:01.591194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.872 [2024-07-15 21:05:01.591203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.872 [2024-07-15 21:05:01.591210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.872 [2024-07-15 21:05:01.594756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.872 [2024-07-15 21:05:01.603956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.872 [2024-07-15 21:05:01.604742] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.872 [2024-07-15 21:05:01.604780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.872 [2024-07-15 21:05:01.604791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.872 [2024-07-15 21:05:01.605030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.872 [2024-07-15 21:05:01.605260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.872 [2024-07-15 21:05:01.605269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.872 [2024-07-15 21:05:01.605277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.872 [2024-07-15 21:05:01.608824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.872 [2024-07-15 21:05:01.617827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.872 [2024-07-15 21:05:01.618469] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.872 [2024-07-15 21:05:01.618488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.872 [2024-07-15 21:05:01.618496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.872 [2024-07-15 21:05:01.618715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.872 [2024-07-15 21:05:01.618934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.872 [2024-07-15 21:05:01.618942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.872 [2024-07-15 21:05:01.618948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.872 [2024-07-15 21:05:01.622495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.872 [2024-07-15 21:05:01.631710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.872 [2024-07-15 21:05:01.632466] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.872 [2024-07-15 21:05:01.632503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.872 [2024-07-15 21:05:01.632514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.872 [2024-07-15 21:05:01.632753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.872 [2024-07-15 21:05:01.632977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.872 [2024-07-15 21:05:01.632985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.872 [2024-07-15 21:05:01.632993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.872 [2024-07-15 21:05:01.636548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.872 [2024-07-15 21:05:01.645621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.872 [2024-07-15 21:05:01.646394] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.872 [2024-07-15 21:05:01.646432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.872 [2024-07-15 21:05:01.646443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.872 [2024-07-15 21:05:01.646682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.872 [2024-07-15 21:05:01.646906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.873 [2024-07-15 21:05:01.646914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.873 [2024-07-15 21:05:01.646922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.873 [2024-07-15 21:05:01.650477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.873 [2024-07-15 21:05:01.659493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.873 [2024-07-15 21:05:01.660202] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.873 [2024-07-15 21:05:01.660239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.873 [2024-07-15 21:05:01.660250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.873 [2024-07-15 21:05:01.660489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.873 [2024-07-15 21:05:01.660713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.873 [2024-07-15 21:05:01.660721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.873 [2024-07-15 21:05:01.660729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.873 [2024-07-15 21:05:01.664283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.873 [2024-07-15 21:05:01.673481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.873 [2024-07-15 21:05:01.674084] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.873 [2024-07-15 21:05:01.674121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.873 [2024-07-15 21:05:01.674145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.873 [2024-07-15 21:05:01.674386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.873 [2024-07-15 21:05:01.674609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.873 [2024-07-15 21:05:01.674618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.873 [2024-07-15 21:05:01.674625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.873 [2024-07-15 21:05:01.678178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.873 [2024-07-15 21:05:01.687383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.873 [2024-07-15 21:05:01.688161] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.873 [2024-07-15 21:05:01.688197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.873 [2024-07-15 21:05:01.688208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.873 [2024-07-15 21:05:01.688447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.873 [2024-07-15 21:05:01.688669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.873 [2024-07-15 21:05:01.688678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.873 [2024-07-15 21:05:01.688685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.873 [2024-07-15 21:05:01.692247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.873 [2024-07-15 21:05:01.701246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.873 [2024-07-15 21:05:01.701962] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.873 [2024-07-15 21:05:01.702000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.873 [2024-07-15 21:05:01.702010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.873 [2024-07-15 21:05:01.702257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.873 [2024-07-15 21:05:01.702481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.873 [2024-07-15 21:05:01.702489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.873 [2024-07-15 21:05:01.702497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.873 [2024-07-15 21:05:01.706046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.873 [2024-07-15 21:05:01.715058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.873 [2024-07-15 21:05:01.715837] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.873 [2024-07-15 21:05:01.715874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.873 [2024-07-15 21:05:01.715885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.873 [2024-07-15 21:05:01.716133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.873 [2024-07-15 21:05:01.716357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.873 [2024-07-15 21:05:01.716371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.873 [2024-07-15 21:05:01.716379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.873 [2024-07-15 21:05:01.719932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.873 [2024-07-15 21:05:01.728957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.873 [2024-07-15 21:05:01.729688] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.873 [2024-07-15 21:05:01.729726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.873 [2024-07-15 21:05:01.729738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.873 [2024-07-15 21:05:01.729980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.873 [2024-07-15 21:05:01.730212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.873 [2024-07-15 21:05:01.730221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.873 [2024-07-15 21:05:01.730229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.873 [2024-07-15 21:05:01.733784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.873 [2024-07-15 21:05:01.742781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.873 [2024-07-15 21:05:01.743431] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.873 [2024-07-15 21:05:01.743449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.873 [2024-07-15 21:05:01.743457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.873 [2024-07-15 21:05:01.743677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.873 [2024-07-15 21:05:01.743897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.873 [2024-07-15 21:05:01.743904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.873 [2024-07-15 21:05:01.743911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.873 [2024-07-15 21:05:01.747460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.873 [2024-07-15 21:05:01.756676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.873 [2024-07-15 21:05:01.757423] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.873 [2024-07-15 21:05:01.757460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:57.873 [2024-07-15 21:05:01.757471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:57.873 [2024-07-15 21:05:01.757710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:57.873 [2024-07-15 21:05:01.757933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.873 [2024-07-15 21:05:01.757941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.873 [2024-07-15 21:05:01.757949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.873 [2024-07-15 21:05:01.761513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.135 [2024-07-15 21:05:01.770512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.135 [2024-07-15 21:05:01.771301] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-07-15 21:05:01.771338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.135 [2024-07-15 21:05:01.771349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.135 [2024-07-15 21:05:01.771588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.135 [2024-07-15 21:05:01.771811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.135 [2024-07-15 21:05:01.771819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.135 [2024-07-15 21:05:01.771826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.135 [2024-07-15 21:05:01.775388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.135 [2024-07-15 21:05:01.784402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.135 [2024-07-15 21:05:01.785168] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-07-15 21:05:01.785206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.135 [2024-07-15 21:05:01.785218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.135 [2024-07-15 21:05:01.785460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.135 [2024-07-15 21:05:01.785684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.135 [2024-07-15 21:05:01.785692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.135 [2024-07-15 21:05:01.785700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.135 [2024-07-15 21:05:01.789267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.135 [2024-07-15 21:05:01.798274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.135 [2024-07-15 21:05:01.798978] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-07-15 21:05:01.799014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.135 [2024-07-15 21:05:01.799025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.135 [2024-07-15 21:05:01.799271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.135 [2024-07-15 21:05:01.799496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.135 [2024-07-15 21:05:01.799505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.135 [2024-07-15 21:05:01.799513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.135 [2024-07-15 21:05:01.803065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.135 [2024-07-15 21:05:01.812072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.135 [2024-07-15 21:05:01.812842] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-07-15 21:05:01.812879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.135 [2024-07-15 21:05:01.812890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.135 [2024-07-15 21:05:01.813141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.135 [2024-07-15 21:05:01.813365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.135 [2024-07-15 21:05:01.813373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.135 [2024-07-15 21:05:01.813380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.135 [2024-07-15 21:05:01.816937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.135 [2024-07-15 21:05:01.825932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.135 [2024-07-15 21:05:01.826555] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-07-15 21:05:01.826592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.135 [2024-07-15 21:05:01.826603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.135 [2024-07-15 21:05:01.826842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.135 [2024-07-15 21:05:01.827065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.135 [2024-07-15 21:05:01.827073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.135 [2024-07-15 21:05:01.827081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.135 [2024-07-15 21:05:01.830644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.135 [2024-07-15 21:05:01.839850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.135 [2024-07-15 21:05:01.840576] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-07-15 21:05:01.840613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.135 [2024-07-15 21:05:01.840624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.135 [2024-07-15 21:05:01.840863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.135 [2024-07-15 21:05:01.841086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.135 [2024-07-15 21:05:01.841095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.135 [2024-07-15 21:05:01.841102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.135 [2024-07-15 21:05:01.844661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.135 [2024-07-15 21:05:01.853666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.135 [2024-07-15 21:05:01.854422] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-07-15 21:05:01.854459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.135 [2024-07-15 21:05:01.854469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.135 [2024-07-15 21:05:01.854708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.135 [2024-07-15 21:05:01.854931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.135 [2024-07-15 21:05:01.854940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.135 [2024-07-15 21:05:01.854952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.135 [2024-07-15 21:05:01.858511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.135 [2024-07-15 21:05:01.867520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.135 [2024-07-15 21:05:01.868273] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-07-15 21:05:01.868310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.135 [2024-07-15 21:05:01.868321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.135 [2024-07-15 21:05:01.868559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.135 [2024-07-15 21:05:01.868782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.135 [2024-07-15 21:05:01.868790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.136 [2024-07-15 21:05:01.868798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.136 [2024-07-15 21:05:01.872359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.136 [2024-07-15 21:05:01.881348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.136 [2024-07-15 21:05:01.882113] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-07-15 21:05:01.882156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.136 [2024-07-15 21:05:01.882168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.136 [2024-07-15 21:05:01.882406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.136 [2024-07-15 21:05:01.882629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.136 [2024-07-15 21:05:01.882638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.136 [2024-07-15 21:05:01.882645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.136 [2024-07-15 21:05:01.886196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.136 [2024-07-15 21:05:01.895192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.136 [2024-07-15 21:05:01.895953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-07-15 21:05:01.895990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.136 [2024-07-15 21:05:01.896000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.136 [2024-07-15 21:05:01.896247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.136 [2024-07-15 21:05:01.896471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.136 [2024-07-15 21:05:01.896479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.136 [2024-07-15 21:05:01.896486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.136 [2024-07-15 21:05:01.900039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.136 [2024-07-15 21:05:01.909032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.136 [2024-07-15 21:05:01.909799] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-07-15 21:05:01.909841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.136 [2024-07-15 21:05:01.909852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.136 [2024-07-15 21:05:01.910091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.136 [2024-07-15 21:05:01.910323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.136 [2024-07-15 21:05:01.910333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.136 [2024-07-15 21:05:01.910340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.136 [2024-07-15 21:05:01.913884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.136 [2024-07-15 21:05:01.922868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.136 [2024-07-15 21:05:01.923635] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-07-15 21:05:01.923672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.136 [2024-07-15 21:05:01.923683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.136 [2024-07-15 21:05:01.923921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.136 [2024-07-15 21:05:01.924153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.136 [2024-07-15 21:05:01.924162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.136 [2024-07-15 21:05:01.924170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.136 [2024-07-15 21:05:01.927717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.136 [2024-07-15 21:05:01.936714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.136 [2024-07-15 21:05:01.937257] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-07-15 21:05:01.937276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.136 [2024-07-15 21:05:01.937284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.136 [2024-07-15 21:05:01.937503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.136 [2024-07-15 21:05:01.937722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.136 [2024-07-15 21:05:01.937729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.136 [2024-07-15 21:05:01.937736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.136 [2024-07-15 21:05:01.941282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.136 [2024-07-15 21:05:01.950691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.136 [2024-07-15 21:05:01.951443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-07-15 21:05:01.951480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.136 [2024-07-15 21:05:01.951490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.136 [2024-07-15 21:05:01.951729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.136 [2024-07-15 21:05:01.951960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.136 [2024-07-15 21:05:01.951969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.136 [2024-07-15 21:05:01.951976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.136 [2024-07-15 21:05:01.955536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.136 [2024-07-15 21:05:01.964677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.136 [2024-07-15 21:05:01.965463] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-07-15 21:05:01.965500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.136 [2024-07-15 21:05:01.965511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.136 [2024-07-15 21:05:01.965750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.136 [2024-07-15 21:05:01.965973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.136 [2024-07-15 21:05:01.965981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.136 [2024-07-15 21:05:01.965988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.136 [2024-07-15 21:05:01.969558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.136 [2024-07-15 21:05:01.978562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.136 [2024-07-15 21:05:01.979342] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-07-15 21:05:01.979378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.136 [2024-07-15 21:05:01.979389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.136 [2024-07-15 21:05:01.979628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.136 [2024-07-15 21:05:01.979851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.136 [2024-07-15 21:05:01.979859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.136 [2024-07-15 21:05:01.979867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.136 [2024-07-15 21:05:01.983428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.136 [2024-07-15 21:05:01.992430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.136 [2024-07-15 21:05:01.993191] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-07-15 21:05:01.993228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.136 [2024-07-15 21:05:01.993238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.136 [2024-07-15 21:05:01.993477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.136 [2024-07-15 21:05:01.993700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.136 [2024-07-15 21:05:01.993708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.136 [2024-07-15 21:05:01.993716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.136 [2024-07-15 21:05:01.997280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.137 [2024-07-15 21:05:02.006285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.137 [2024-07-15 21:05:02.007004] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-07-15 21:05:02.007041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.137 [2024-07-15 21:05:02.007052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.137 [2024-07-15 21:05:02.007300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.137 [2024-07-15 21:05:02.007524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.137 [2024-07-15 21:05:02.007533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.137 [2024-07-15 21:05:02.007540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.137 [2024-07-15 21:05:02.011089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.137 [2024-07-15 21:05:02.020081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.137 [2024-07-15 21:05:02.020850] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-07-15 21:05:02.020887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.137 [2024-07-15 21:05:02.020897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.137 [2024-07-15 21:05:02.021145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.137 [2024-07-15 21:05:02.021369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.137 [2024-07-15 21:05:02.021378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.137 [2024-07-15 21:05:02.021385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.137 [2024-07-15 21:05:02.024939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.399 [2024-07-15 21:05:02.034173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.399 [2024-07-15 21:05:02.034947] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-07-15 21:05:02.034984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.399 [2024-07-15 21:05:02.034995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.399 [2024-07-15 21:05:02.035242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.399 [2024-07-15 21:05:02.035465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.399 [2024-07-15 21:05:02.035474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.399 [2024-07-15 21:05:02.035481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.399 [2024-07-15 21:05:02.039031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.399 [2024-07-15 21:05:02.048024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.399 [2024-07-15 21:05:02.048654] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-07-15 21:05:02.048691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.399 [2024-07-15 21:05:02.048706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.399 [2024-07-15 21:05:02.048944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.399 [2024-07-15 21:05:02.049176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.399 [2024-07-15 21:05:02.049186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.399 [2024-07-15 21:05:02.049194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.399 [2024-07-15 21:05:02.052751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.399 [2024-07-15 21:05:02.061949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.399 [2024-07-15 21:05:02.062713] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-07-15 21:05:02.062750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.399 [2024-07-15 21:05:02.062761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.399 [2024-07-15 21:05:02.063000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.399 [2024-07-15 21:05:02.063233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.399 [2024-07-15 21:05:02.063242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.399 [2024-07-15 21:05:02.063250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.399 [2024-07-15 21:05:02.066800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.399 [2024-07-15 21:05:02.075860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.399 [2024-07-15 21:05:02.076643] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-07-15 21:05:02.076679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.399 [2024-07-15 21:05:02.076690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.399 [2024-07-15 21:05:02.076929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.399 [2024-07-15 21:05:02.077161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.399 [2024-07-15 21:05:02.077170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.399 [2024-07-15 21:05:02.077178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.399 [2024-07-15 21:05:02.080724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.399 [2024-07-15 21:05:02.089716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.399 [2024-07-15 21:05:02.090443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-07-15 21:05:02.090481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.399 [2024-07-15 21:05:02.090491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.399 [2024-07-15 21:05:02.090730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.399 [2024-07-15 21:05:02.090953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.399 [2024-07-15 21:05:02.090966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.399 [2024-07-15 21:05:02.090973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.399 [2024-07-15 21:05:02.094528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.399 [2024-07-15 21:05:02.103537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.399 [2024-07-15 21:05:02.104228] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-07-15 21:05:02.104265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.399 [2024-07-15 21:05:02.104275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.399 [2024-07-15 21:05:02.104514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.399 [2024-07-15 21:05:02.104737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.399 [2024-07-15 21:05:02.104746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.399 [2024-07-15 21:05:02.104753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.399 [2024-07-15 21:05:02.108313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.400 [2024-07-15 21:05:02.117511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.400 [2024-07-15 21:05:02.118278] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-07-15 21:05:02.118315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.400 [2024-07-15 21:05:02.118326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.400 [2024-07-15 21:05:02.118565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.400 [2024-07-15 21:05:02.118788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.400 [2024-07-15 21:05:02.118796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.400 [2024-07-15 21:05:02.118804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.400 [2024-07-15 21:05:02.122356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.400 [2024-07-15 21:05:02.131356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.400 [2024-07-15 21:05:02.131892] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-07-15 21:05:02.131909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.400 [2024-07-15 21:05:02.131917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.400 [2024-07-15 21:05:02.132143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.400 [2024-07-15 21:05:02.132363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.400 [2024-07-15 21:05:02.132370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.400 [2024-07-15 21:05:02.132377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.400 [2024-07-15 21:05:02.135923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.400 [2024-07-15 21:05:02.145156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.400 [2024-07-15 21:05:02.145906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-07-15 21:05:02.145943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.400 [2024-07-15 21:05:02.145954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.400 [2024-07-15 21:05:02.146202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.400 [2024-07-15 21:05:02.146426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.400 [2024-07-15 21:05:02.146434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.400 [2024-07-15 21:05:02.146441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.400 [2024-07-15 21:05:02.149987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.400 [2024-07-15 21:05:02.158984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.400 [2024-07-15 21:05:02.159711] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-07-15 21:05:02.159747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.400 [2024-07-15 21:05:02.159758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.400 [2024-07-15 21:05:02.159997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.400 [2024-07-15 21:05:02.160226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.400 [2024-07-15 21:05:02.160235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.400 [2024-07-15 21:05:02.160243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.400 [2024-07-15 21:05:02.163797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.400 [2024-07-15 21:05:02.172787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.400 [2024-07-15 21:05:02.173551] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-07-15 21:05:02.173588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.400 [2024-07-15 21:05:02.173598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.400 [2024-07-15 21:05:02.173837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.400 [2024-07-15 21:05:02.174060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.400 [2024-07-15 21:05:02.174068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.400 [2024-07-15 21:05:02.174076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.400 [2024-07-15 21:05:02.177641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.400 [2024-07-15 21:05:02.186650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.400 [2024-07-15 21:05:02.187417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-07-15 21:05:02.187454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.400 [2024-07-15 21:05:02.187465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.400 [2024-07-15 21:05:02.187708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.400 [2024-07-15 21:05:02.187932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.400 [2024-07-15 21:05:02.187940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.400 [2024-07-15 21:05:02.187947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.400 [2024-07-15 21:05:02.191506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.400 [2024-07-15 21:05:02.200504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.400 [2024-07-15 21:05:02.201172] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-07-15 21:05:02.201191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.400 [2024-07-15 21:05:02.201199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.400 [2024-07-15 21:05:02.201419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.400 [2024-07-15 21:05:02.201637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.400 [2024-07-15 21:05:02.201645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.400 [2024-07-15 21:05:02.201652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.400 [2024-07-15 21:05:02.205197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.400 [2024-07-15 21:05:02.214388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.400 [2024-07-15 21:05:02.215151] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-07-15 21:05:02.215188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.400 [2024-07-15 21:05:02.215200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.400 [2024-07-15 21:05:02.215443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.400 [2024-07-15 21:05:02.215666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.400 [2024-07-15 21:05:02.215674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.400 [2024-07-15 21:05:02.215682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.400 [2024-07-15 21:05:02.219233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.400 [2024-07-15 21:05:02.228246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.400 [2024-07-15 21:05:02.228795] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-07-15 21:05:02.228832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.400 [2024-07-15 21:05:02.228842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.400 [2024-07-15 21:05:02.229081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.400 [2024-07-15 21:05:02.229315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.400 [2024-07-15 21:05:02.229324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.400 [2024-07-15 21:05:02.229336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.400 [2024-07-15 21:05:02.232883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.400 [2024-07-15 21:05:02.242097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.400 [2024-07-15 21:05:02.242818] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-07-15 21:05:02.242854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.401 [2024-07-15 21:05:02.242865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.401 [2024-07-15 21:05:02.243104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.401 [2024-07-15 21:05:02.243337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.401 [2024-07-15 21:05:02.243346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.401 [2024-07-15 21:05:02.243353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.401 [2024-07-15 21:05:02.246904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.401 [2024-07-15 21:05:02.255997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.401 [2024-07-15 21:05:02.256730] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-07-15 21:05:02.256767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.401 [2024-07-15 21:05:02.256778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.401 [2024-07-15 21:05:02.257017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.401 [2024-07-15 21:05:02.257247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.401 [2024-07-15 21:05:02.257256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.401 [2024-07-15 21:05:02.257264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.401 [2024-07-15 21:05:02.260819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.401 [2024-07-15 21:05:02.269830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.401 [2024-07-15 21:05:02.270559] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-07-15 21:05:02.270596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.401 [2024-07-15 21:05:02.270607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.401 [2024-07-15 21:05:02.270846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.401 [2024-07-15 21:05:02.271069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.401 [2024-07-15 21:05:02.271078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.401 [2024-07-15 21:05:02.271085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.401 [2024-07-15 21:05:02.274644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.401 [2024-07-15 21:05:02.283656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.401 [2024-07-15 21:05:02.284410] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-07-15 21:05:02.284447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.401 [2024-07-15 21:05:02.284458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.401 [2024-07-15 21:05:02.284696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.401 [2024-07-15 21:05:02.284920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.401 [2024-07-15 21:05:02.284928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.401 [2024-07-15 21:05:02.284935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.401 [2024-07-15 21:05:02.288493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.663 [2024-07-15 21:05:02.297493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.663 [2024-07-15 21:05:02.298165] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.663 [2024-07-15 21:05:02.298183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.663 [2024-07-15 21:05:02.298191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.663 [2024-07-15 21:05:02.298411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.663 [2024-07-15 21:05:02.298630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.663 [2024-07-15 21:05:02.298638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.663 [2024-07-15 21:05:02.298644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.663 [2024-07-15 21:05:02.302188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.663 [2024-07-15 21:05:02.311399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.663 [2024-07-15 21:05:02.312144] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.663 [2024-07-15 21:05:02.312181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.663 [2024-07-15 21:05:02.312192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.663 [2024-07-15 21:05:02.312431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.663 [2024-07-15 21:05:02.312654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.663 [2024-07-15 21:05:02.312662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.663 [2024-07-15 21:05:02.312670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.663 [2024-07-15 21:05:02.316224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.663 [2024-07-15 21:05:02.325215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.663 [2024-07-15 21:05:02.325937] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.663 [2024-07-15 21:05:02.325974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.663 [2024-07-15 21:05:02.325984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.663 [2024-07-15 21:05:02.326235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.663 [2024-07-15 21:05:02.326460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.663 [2024-07-15 21:05:02.326468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.663 [2024-07-15 21:05:02.326475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.663 [2024-07-15 21:05:02.330027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.663 [2024-07-15 21:05:02.339024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.663 [2024-07-15 21:05:02.339765] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.663 [2024-07-15 21:05:02.339803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.663 [2024-07-15 21:05:02.339814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.663 [2024-07-15 21:05:02.340053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.663 [2024-07-15 21:05:02.340285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.663 [2024-07-15 21:05:02.340294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.663 [2024-07-15 21:05:02.340301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.663 [2024-07-15 21:05:02.343858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.663 [2024-07-15 21:05:02.352861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.663 [2024-07-15 21:05:02.353606] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.663 [2024-07-15 21:05:02.353642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.663 [2024-07-15 21:05:02.353653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.663 [2024-07-15 21:05:02.353892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.663 [2024-07-15 21:05:02.354114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.663 [2024-07-15 21:05:02.354132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.663 [2024-07-15 21:05:02.354140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.663 [2024-07-15 21:05:02.357695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.663 [2024-07-15 21:05:02.366695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.663 [2024-07-15 21:05:02.367462] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.663 [2024-07-15 21:05:02.367499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.663 [2024-07-15 21:05:02.367509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.663 [2024-07-15 21:05:02.367748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.663 [2024-07-15 21:05:02.367971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.663 [2024-07-15 21:05:02.367979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.663 [2024-07-15 21:05:02.367991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.663 [2024-07-15 21:05:02.371548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.663 [2024-07-15 21:05:02.380545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.663 [2024-07-15 21:05:02.381228] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.663 [2024-07-15 21:05:02.381264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.663 [2024-07-15 21:05:02.381277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.663 [2024-07-15 21:05:02.381516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.663 [2024-07-15 21:05:02.381739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.663 [2024-07-15 21:05:02.381747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.663 [2024-07-15 21:05:02.381755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.663 [2024-07-15 21:05:02.385322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.663 [2024-07-15 21:05:02.394535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.663 [2024-07-15 21:05:02.395092] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.663 [2024-07-15 21:05:02.395136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.663 [2024-07-15 21:05:02.395148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.663 [2024-07-15 21:05:02.395386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.663 [2024-07-15 21:05:02.395609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.663 [2024-07-15 21:05:02.395617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.663 [2024-07-15 21:05:02.395624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.663 [2024-07-15 21:05:02.399179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.663 [2024-07-15 21:05:02.408384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.663 [2024-07-15 21:05:02.409129] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.663 [2024-07-15 21:05:02.409165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.664 [2024-07-15 21:05:02.409176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.664 [2024-07-15 21:05:02.409415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.664 [2024-07-15 21:05:02.409638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.664 [2024-07-15 21:05:02.409646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.664 [2024-07-15 21:05:02.409653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.664 [2024-07-15 21:05:02.413208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.664 [2024-07-15 21:05:02.422202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.664 [2024-07-15 21:05:02.422954] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.664 [2024-07-15 21:05:02.422995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.664 [2024-07-15 21:05:02.423006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.664 [2024-07-15 21:05:02.423254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.664 [2024-07-15 21:05:02.423477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.664 [2024-07-15 21:05:02.423485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.664 [2024-07-15 21:05:02.423493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.664 [2024-07-15 21:05:02.427043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.664 [2024-07-15 21:05:02.436041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.664 [2024-07-15 21:05:02.436745] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.664 [2024-07-15 21:05:02.436782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.664 [2024-07-15 21:05:02.436793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.664 [2024-07-15 21:05:02.437031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.664 [2024-07-15 21:05:02.437263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.664 [2024-07-15 21:05:02.437272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.664 [2024-07-15 21:05:02.437279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.664 [2024-07-15 21:05:02.440833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.664 [2024-07-15 21:05:02.450034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.664 [2024-07-15 21:05:02.450706] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.664 [2024-07-15 21:05:02.450743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.664 [2024-07-15 21:05:02.450754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.664 [2024-07-15 21:05:02.450993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.664 [2024-07-15 21:05:02.451225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.664 [2024-07-15 21:05:02.451234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.664 [2024-07-15 21:05:02.451241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.664 [2024-07-15 21:05:02.454793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.664 [2024-07-15 21:05:02.464003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.664 [2024-07-15 21:05:02.464730] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.664 [2024-07-15 21:05:02.464767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.664 [2024-07-15 21:05:02.464778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.664 [2024-07-15 21:05:02.465017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.664 [2024-07-15 21:05:02.465252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.664 [2024-07-15 21:05:02.465262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.664 [2024-07-15 21:05:02.465269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.664 [2024-07-15 21:05:02.468817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.664 [2024-07-15 21:05:02.477927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.664 [2024-07-15 21:05:02.478697] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.664 [2024-07-15 21:05:02.478734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.664 [2024-07-15 21:05:02.478744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.664 [2024-07-15 21:05:02.478983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.664 [2024-07-15 21:05:02.479215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.664 [2024-07-15 21:05:02.479224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.664 [2024-07-15 21:05:02.479232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.664 [2024-07-15 21:05:02.482782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.664 [2024-07-15 21:05:02.491778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.664 [2024-07-15 21:05:02.492503] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.664 [2024-07-15 21:05:02.492539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.664 [2024-07-15 21:05:02.492550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.664 [2024-07-15 21:05:02.492788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.664 [2024-07-15 21:05:02.493011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.664 [2024-07-15 21:05:02.493020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.664 [2024-07-15 21:05:02.493027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.664 [2024-07-15 21:05:02.496585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.664 [2024-07-15 21:05:02.505582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.664 [2024-07-15 21:05:02.506363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.664 [2024-07-15 21:05:02.506401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.664 [2024-07-15 21:05:02.506412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.664 [2024-07-15 21:05:02.506651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.664 [2024-07-15 21:05:02.506874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.664 [2024-07-15 21:05:02.506882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.664 [2024-07-15 21:05:02.506889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.664 [2024-07-15 21:05:02.510454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.664 [2024-07-15 21:05:02.519451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.664 [2024-07-15 21:05:02.520119] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.664 [2024-07-15 21:05:02.520142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.664 [2024-07-15 21:05:02.520149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.664 [2024-07-15 21:05:02.520368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.664 [2024-07-15 21:05:02.520587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.664 [2024-07-15 21:05:02.520594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.664 [2024-07-15 21:05:02.520601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.664 [2024-07-15 21:05:02.524147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.664 [2024-07-15 21:05:02.533344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.664 [2024-07-15 21:05:02.533990] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.664 [2024-07-15 21:05:02.534005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.664 [2024-07-15 21:05:02.534013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.665 [2024-07-15 21:05:02.534237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.665 [2024-07-15 21:05:02.534456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.665 [2024-07-15 21:05:02.534464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.665 [2024-07-15 21:05:02.534470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.665 [2024-07-15 21:05:02.538038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.665 [2024-07-15 21:05:02.547240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.665 [2024-07-15 21:05:02.547843] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.665 [2024-07-15 21:05:02.547880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.665 [2024-07-15 21:05:02.547890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.665 [2024-07-15 21:05:02.548138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.665 [2024-07-15 21:05:02.548362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.665 [2024-07-15 21:05:02.548370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.665 [2024-07-15 21:05:02.548378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.665 [2024-07-15 21:05:02.551942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.927 [2024-07-15 21:05:02.561150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.927 [2024-07-15 21:05:02.561897] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.927 [2024-07-15 21:05:02.561934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.927 [2024-07-15 21:05:02.561949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.927 [2024-07-15 21:05:02.562196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.927 [2024-07-15 21:05:02.562419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.927 [2024-07-15 21:05:02.562428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.927 [2024-07-15 21:05:02.562435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.927 [2024-07-15 21:05:02.565986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.927 [2024-07-15 21:05:02.574980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.927 [2024-07-15 21:05:02.575720] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.927 [2024-07-15 21:05:02.575757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.927 [2024-07-15 21:05:02.575768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.927 [2024-07-15 21:05:02.576007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.927 [2024-07-15 21:05:02.576238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.927 [2024-07-15 21:05:02.576247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.927 [2024-07-15 21:05:02.576254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.927 [2024-07-15 21:05:02.579806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.927 [2024-07-15 21:05:02.588804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.927 [2024-07-15 21:05:02.589537] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.927 [2024-07-15 21:05:02.589574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.927 [2024-07-15 21:05:02.589585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.927 [2024-07-15 21:05:02.589824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.927 [2024-07-15 21:05:02.590047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.927 [2024-07-15 21:05:02.590055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.927 [2024-07-15 21:05:02.590062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.927 [2024-07-15 21:05:02.593627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.927 [2024-07-15 21:05:02.602642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.927 [2024-07-15 21:05:02.603400] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.927 [2024-07-15 21:05:02.603437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.927 [2024-07-15 21:05:02.603447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.927 [2024-07-15 21:05:02.603686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.927 [2024-07-15 21:05:02.603909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.927 [2024-07-15 21:05:02.603921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.927 [2024-07-15 21:05:02.603929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.927 [2024-07-15 21:05:02.607488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.927 [2024-07-15 21:05:02.616488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.927 [2024-07-15 21:05:02.617211] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.927 [2024-07-15 21:05:02.617248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.927 [2024-07-15 21:05:02.617259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.927 [2024-07-15 21:05:02.617498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.927 [2024-07-15 21:05:02.617721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.927 [2024-07-15 21:05:02.617729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.927 [2024-07-15 21:05:02.617736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.927 [2024-07-15 21:05:02.621296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.927 [2024-07-15 21:05:02.630287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.927 [2024-07-15 21:05:02.631048] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.927 [2024-07-15 21:05:02.631084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.927 [2024-07-15 21:05:02.631095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.927 [2024-07-15 21:05:02.631343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.927 [2024-07-15 21:05:02.631567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.927 [2024-07-15 21:05:02.631575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.927 [2024-07-15 21:05:02.631582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.927 [2024-07-15 21:05:02.635137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.927 [2024-07-15 21:05:02.644139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.927 [2024-07-15 21:05:02.644799] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.928 [2024-07-15 21:05:02.644836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.928 [2024-07-15 21:05:02.644847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.928 [2024-07-15 21:05:02.645085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.928 [2024-07-15 21:05:02.645316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.928 [2024-07-15 21:05:02.645325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.928 [2024-07-15 21:05:02.645332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.928 [2024-07-15 21:05:02.648881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.928 [2024-07-15 21:05:02.658099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.928 [2024-07-15 21:05:02.658822] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.928 [2024-07-15 21:05:02.658859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.928 [2024-07-15 21:05:02.658870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.928 [2024-07-15 21:05:02.659108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.928 [2024-07-15 21:05:02.659338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.928 [2024-07-15 21:05:02.659347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.928 [2024-07-15 21:05:02.659354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.928 [2024-07-15 21:05:02.662909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.928 [2024-07-15 21:05:02.671910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.928 [2024-07-15 21:05:02.672631] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.928 [2024-07-15 21:05:02.672668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.928 [2024-07-15 21:05:02.672679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.928 [2024-07-15 21:05:02.672919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.928 [2024-07-15 21:05:02.673151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.928 [2024-07-15 21:05:02.673161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.928 [2024-07-15 21:05:02.673168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.928 [2024-07-15 21:05:02.676722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.928 [2024-07-15 21:05:02.685715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.928 [2024-07-15 21:05:02.686353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.928 [2024-07-15 21:05:02.686372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.928 [2024-07-15 21:05:02.686380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.928 [2024-07-15 21:05:02.686599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.928 [2024-07-15 21:05:02.686819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.928 [2024-07-15 21:05:02.686826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.928 [2024-07-15 21:05:02.686833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.928 [2024-07-15 21:05:02.690386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.928 [2024-07-15 21:05:02.699603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.928 [2024-07-15 21:05:02.700352] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.928 [2024-07-15 21:05:02.700390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.928 [2024-07-15 21:05:02.700402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.928 [2024-07-15 21:05:02.700646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.928 [2024-07-15 21:05:02.700870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.928 [2024-07-15 21:05:02.700879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.928 [2024-07-15 21:05:02.700886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.928 [2024-07-15 21:05:02.704445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.928 [2024-07-15 21:05:02.713449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.928 [2024-07-15 21:05:02.714150] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.928 [2024-07-15 21:05:02.714187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.928 [2024-07-15 21:05:02.714199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.928 [2024-07-15 21:05:02.714441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.928 [2024-07-15 21:05:02.714665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.928 [2024-07-15 21:05:02.714673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.928 [2024-07-15 21:05:02.714680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.928 [2024-07-15 21:05:02.718235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.928 [2024-07-15 21:05:02.727454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.928 [2024-07-15 21:05:02.728085] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.928 [2024-07-15 21:05:02.728102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.928 [2024-07-15 21:05:02.728111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.928 [2024-07-15 21:05:02.728364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.928 [2024-07-15 21:05:02.728586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.928 [2024-07-15 21:05:02.728593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.928 [2024-07-15 21:05:02.728600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.928 [2024-07-15 21:05:02.732153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.928 [2024-07-15 21:05:02.741357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.928 [2024-07-15 21:05:02.742088] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.928 [2024-07-15 21:05:02.742132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.928 [2024-07-15 21:05:02.742146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.928 [2024-07-15 21:05:02.742385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.928 [2024-07-15 21:05:02.742609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.928 [2024-07-15 21:05:02.742617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.928 [2024-07-15 21:05:02.742630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.928 [2024-07-15 21:05:02.746184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.928 [2024-07-15 21:05:02.755191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.928 [2024-07-15 21:05:02.755944] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.928 [2024-07-15 21:05:02.755981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.928 [2024-07-15 21:05:02.755991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.928 [2024-07-15 21:05:02.756238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.928 [2024-07-15 21:05:02.756462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.928 [2024-07-15 21:05:02.756470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.928 [2024-07-15 21:05:02.756478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.928 [2024-07-15 21:05:02.760043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.928 [2024-07-15 21:05:02.769053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.928 [2024-07-15 21:05:02.769669] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.928 [2024-07-15 21:05:02.769706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.928 [2024-07-15 21:05:02.769718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.929 [2024-07-15 21:05:02.769956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.929 [2024-07-15 21:05:02.770190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.929 [2024-07-15 21:05:02.770199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.929 [2024-07-15 21:05:02.770206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.929 [2024-07-15 21:05:02.773758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.929 [2024-07-15 21:05:02.782959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.929 [2024-07-15 21:05:02.783750] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.929 [2024-07-15 21:05:02.783787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.929 [2024-07-15 21:05:02.783797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.929 [2024-07-15 21:05:02.784036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.929 [2024-07-15 21:05:02.784266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.929 [2024-07-15 21:05:02.784275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.929 [2024-07-15 21:05:02.784283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.929 [2024-07-15 21:05:02.787831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.929 [2024-07-15 21:05:02.796830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.929 [2024-07-15 21:05:02.797556] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.929 [2024-07-15 21:05:02.797592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.929 [2024-07-15 21:05:02.797603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.929 [2024-07-15 21:05:02.797842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.929 [2024-07-15 21:05:02.798064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.929 [2024-07-15 21:05:02.798072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.929 [2024-07-15 21:05:02.798080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.929 [2024-07-15 21:05:02.801646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.929 [2024-07-15 21:05:02.810649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.929 [2024-07-15 21:05:02.811398] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.929 [2024-07-15 21:05:02.811436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:58.929 [2024-07-15 21:05:02.811447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:58.929 [2024-07-15 21:05:02.811686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:58.929 [2024-07-15 21:05:02.811909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.929 [2024-07-15 21:05:02.811917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.929 [2024-07-15 21:05:02.811924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.929 [2024-07-15 21:05:02.815484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.190 [2024-07-15 21:05:02.824484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.190 [2024-07-15 21:05:02.825229] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.190 [2024-07-15 21:05:02.825266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.190 [2024-07-15 21:05:02.825278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.190 [2024-07-15 21:05:02.825521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.190 [2024-07-15 21:05:02.825745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.190 [2024-07-15 21:05:02.825753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.190 [2024-07-15 21:05:02.825760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.190 [2024-07-15 21:05:02.829322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.190 [2024-07-15 21:05:02.838331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.190 [2024-07-15 21:05:02.839002] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.190 [2024-07-15 21:05:02.839020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.190 [2024-07-15 21:05:02.839028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.190 [2024-07-15 21:05:02.839253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.190 [2024-07-15 21:05:02.839478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.190 [2024-07-15 21:05:02.839486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.190 [2024-07-15 21:05:02.839493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.190 [2024-07-15 21:05:02.843035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.190 [2024-07-15 21:05:02.852246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.190 [2024-07-15 21:05:02.852891] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.190 [2024-07-15 21:05:02.852906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.190 [2024-07-15 21:05:02.852913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.190 [2024-07-15 21:05:02.853137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.190 [2024-07-15 21:05:02.853357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.190 [2024-07-15 21:05:02.853364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.190 [2024-07-15 21:05:02.853371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.190 [2024-07-15 21:05:02.856911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.191 [2024-07-15 21:05:02.866109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.191 [2024-07-15 21:05:02.866771] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.191 [2024-07-15 21:05:02.866786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.191 [2024-07-15 21:05:02.866793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.191 [2024-07-15 21:05:02.867011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.191 [2024-07-15 21:05:02.867235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.191 [2024-07-15 21:05:02.867243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.191 [2024-07-15 21:05:02.867250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.191 [2024-07-15 21:05:02.870789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.191 [2024-07-15 21:05:02.879977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.191 [2024-07-15 21:05:02.880722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.191 [2024-07-15 21:05:02.880759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.191 [2024-07-15 21:05:02.880769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.191 [2024-07-15 21:05:02.881008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.191 [2024-07-15 21:05:02.881239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.191 [2024-07-15 21:05:02.881248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.191 [2024-07-15 21:05:02.881256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.191 [2024-07-15 21:05:02.884811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.191 [2024-07-15 21:05:02.893813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.191 [2024-07-15 21:05:02.894355] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.191 [2024-07-15 21:05:02.894392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.191 [2024-07-15 21:05:02.894403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.191 [2024-07-15 21:05:02.894642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.191 [2024-07-15 21:05:02.894865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.191 [2024-07-15 21:05:02.894873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.191 [2024-07-15 21:05:02.894880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.191 [2024-07-15 21:05:02.898439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.191 [2024-07-15 21:05:02.907654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.191 [2024-07-15 21:05:02.908447] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.191 [2024-07-15 21:05:02.908484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.191 [2024-07-15 21:05:02.908494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.191 [2024-07-15 21:05:02.908733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.191 [2024-07-15 21:05:02.908956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.191 [2024-07-15 21:05:02.908965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.191 [2024-07-15 21:05:02.908972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.191 [2024-07-15 21:05:02.912531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.191 [2024-07-15 21:05:02.921533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.191 [2024-07-15 21:05:02.922181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.191 [2024-07-15 21:05:02.922218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.191 [2024-07-15 21:05:02.922231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.191 [2024-07-15 21:05:02.922471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.191 [2024-07-15 21:05:02.922694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.191 [2024-07-15 21:05:02.922702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.191 [2024-07-15 21:05:02.922710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.191 [2024-07-15 21:05:02.926268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.191 [2024-07-15 21:05:02.935478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.191 [2024-07-15 21:05:02.936228] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.191 [2024-07-15 21:05:02.936270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.191 [2024-07-15 21:05:02.936282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.191 [2024-07-15 21:05:02.936525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.191 [2024-07-15 21:05:02.936748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.191 [2024-07-15 21:05:02.936756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.191 [2024-07-15 21:05:02.936764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.191 [2024-07-15 21:05:02.940324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.191 [2024-07-15 21:05:02.949326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.191 [2024-07-15 21:05:02.949999] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.191 [2024-07-15 21:05:02.950037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.191 [2024-07-15 21:05:02.950047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.191 [2024-07-15 21:05:02.950293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.191 [2024-07-15 21:05:02.950517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.191 [2024-07-15 21:05:02.950525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.191 [2024-07-15 21:05:02.950533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.191 [2024-07-15 21:05:02.954094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.191 [2024-07-15 21:05:02.963310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.191 [2024-07-15 21:05:02.963983] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.191 [2024-07-15 21:05:02.964001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.191 [2024-07-15 21:05:02.964009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.191 [2024-07-15 21:05:02.964234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.191 [2024-07-15 21:05:02.964453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.191 [2024-07-15 21:05:02.964461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.191 [2024-07-15 21:05:02.964468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.191 [2024-07-15 21:05:02.968008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.191 [2024-07-15 21:05:02.977214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.191 [2024-07-15 21:05:02.977966] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.191 [2024-07-15 21:05:02.978003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.191 [2024-07-15 21:05:02.978014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.191 [2024-07-15 21:05:02.978259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.191 [2024-07-15 21:05:02.978487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.191 [2024-07-15 21:05:02.978496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.191 [2024-07-15 21:05:02.978503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.191 [2024-07-15 21:05:02.982058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.191 [2024-07-15 21:05:02.991055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.191 [2024-07-15 21:05:02.991788] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.191 [2024-07-15 21:05:02.991825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.191 [2024-07-15 21:05:02.991836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.191 [2024-07-15 21:05:02.992075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.191 [2024-07-15 21:05:02.992306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.191 [2024-07-15 21:05:02.992315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.191 [2024-07-15 21:05:02.992323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.191 [2024-07-15 21:05:02.995872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.191 [2024-07-15 21:05:03.004869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.191 [2024-07-15 21:05:03.005526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.191 [2024-07-15 21:05:03.005544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.191 [2024-07-15 21:05:03.005552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.191 [2024-07-15 21:05:03.005772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.191 [2024-07-15 21:05:03.005990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.191 [2024-07-15 21:05:03.005998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.191 [2024-07-15 21:05:03.006005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.192 [2024-07-15 21:05:03.009557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.192 [2024-07-15 21:05:03.018760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.192 [2024-07-15 21:05:03.019400] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.192 [2024-07-15 21:05:03.019437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.192 [2024-07-15 21:05:03.019448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.192 [2024-07-15 21:05:03.019687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.192 [2024-07-15 21:05:03.019909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.192 [2024-07-15 21:05:03.019918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.192 [2024-07-15 21:05:03.019925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.192 [2024-07-15 21:05:03.023486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.192 [2024-07-15 21:05:03.032700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.192 [2024-07-15 21:05:03.033422] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.192 [2024-07-15 21:05:03.033458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.192 [2024-07-15 21:05:03.033469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.192 [2024-07-15 21:05:03.033708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.192 [2024-07-15 21:05:03.033931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.192 [2024-07-15 21:05:03.033939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.192 [2024-07-15 21:05:03.033946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.192 [2024-07-15 21:05:03.037507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.192 [2024-07-15 21:05:03.046515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.192 [2024-07-15 21:05:03.047230] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.192 [2024-07-15 21:05:03.047267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.192 [2024-07-15 21:05:03.047280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.192 [2024-07-15 21:05:03.047522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.192 [2024-07-15 21:05:03.047745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.192 [2024-07-15 21:05:03.047753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.192 [2024-07-15 21:05:03.047761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.192 [2024-07-15 21:05:03.051330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.192 [2024-07-15 21:05:03.060326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.192 [2024-07-15 21:05:03.061098] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.192 [2024-07-15 21:05:03.061142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.192 [2024-07-15 21:05:03.061153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.192 [2024-07-15 21:05:03.061392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.192 [2024-07-15 21:05:03.061616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.192 [2024-07-15 21:05:03.061624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.192 [2024-07-15 21:05:03.061631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.192 [2024-07-15 21:05:03.065184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.192 [2024-07-15 21:05:03.074186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.192 [2024-07-15 21:05:03.074937] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.192 [2024-07-15 21:05:03.074975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.192 [2024-07-15 21:05:03.074990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.192 [2024-07-15 21:05:03.075236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.192 [2024-07-15 21:05:03.075460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.192 [2024-07-15 21:05:03.075468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.192 [2024-07-15 21:05:03.075475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.192 [2024-07-15 21:05:03.079026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.454 [2024-07-15 21:05:03.088021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.454 [2024-07-15 21:05:03.088657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.454 [2024-07-15 21:05:03.088675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.454 [2024-07-15 21:05:03.088683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.454 [2024-07-15 21:05:03.088902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.454 [2024-07-15 21:05:03.089127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.454 [2024-07-15 21:05:03.089135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.454 [2024-07-15 21:05:03.089142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.454 [2024-07-15 21:05:03.092685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.454 [2024-07-15 21:05:03.101892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.454 [2024-07-15 21:05:03.102595] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.454 [2024-07-15 21:05:03.102632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.454 [2024-07-15 21:05:03.102643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.454 [2024-07-15 21:05:03.102882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.454 [2024-07-15 21:05:03.103105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.454 [2024-07-15 21:05:03.103113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.454 [2024-07-15 21:05:03.103130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.454 [2024-07-15 21:05:03.106680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.454 [2024-07-15 21:05:03.115688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.454 [2024-07-15 21:05:03.116825] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.454 [2024-07-15 21:05:03.116857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.454 [2024-07-15 21:05:03.116867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.454 [2024-07-15 21:05:03.117106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.454 [2024-07-15 21:05:03.117339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.454 [2024-07-15 21:05:03.117354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.454 [2024-07-15 21:05:03.117362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.454 [2024-07-15 21:05:03.120912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.454 [2024-07-15 21:05:03.129507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.454 [2024-07-15 21:05:03.130370] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.454 [2024-07-15 21:05:03.130408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.454 [2024-07-15 21:05:03.130419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.454 [2024-07-15 21:05:03.130658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.454 [2024-07-15 21:05:03.130881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.454 [2024-07-15 21:05:03.130889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.454 [2024-07-15 21:05:03.130897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.454 [2024-07-15 21:05:03.134452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.454 [2024-07-15 21:05:03.143460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.454 [2024-07-15 21:05:03.144102] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.454 [2024-07-15 21:05:03.144120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.454 [2024-07-15 21:05:03.144134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.454 [2024-07-15 21:05:03.144354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.454 [2024-07-15 21:05:03.144574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.454 [2024-07-15 21:05:03.144581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.454 [2024-07-15 21:05:03.144588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.454 [2024-07-15 21:05:03.148136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.454 [2024-07-15 21:05:03.157340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.454 [2024-07-15 21:05:03.158049] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.454 [2024-07-15 21:05:03.158086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.454 [2024-07-15 21:05:03.158098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.454 [2024-07-15 21:05:03.158348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.454 [2024-07-15 21:05:03.158572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.454 [2024-07-15 21:05:03.158581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.454 [2024-07-15 21:05:03.158588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.454 [2024-07-15 21:05:03.162145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.454 [2024-07-15 21:05:03.171144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.454 [2024-07-15 21:05:03.171925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.454 [2024-07-15 21:05:03.171963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.454 [2024-07-15 21:05:03.171975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.454 [2024-07-15 21:05:03.172225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.454 [2024-07-15 21:05:03.172449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.454 [2024-07-15 21:05:03.172458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.454 [2024-07-15 21:05:03.172465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.454 [2024-07-15 21:05:03.176018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.454 [2024-07-15 21:05:03.185017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.454 [2024-07-15 21:05:03.185788] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.454 [2024-07-15 21:05:03.185825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.454 [2024-07-15 21:05:03.185837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.454 [2024-07-15 21:05:03.186078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.454 [2024-07-15 21:05:03.186307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.454 [2024-07-15 21:05:03.186316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.454 [2024-07-15 21:05:03.186323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.454 [2024-07-15 21:05:03.189869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.455 [2024-07-15 21:05:03.198865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.455 [2024-07-15 21:05:03.199606] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.455 [2024-07-15 21:05:03.199643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.455 [2024-07-15 21:05:03.199656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.455 [2024-07-15 21:05:03.199896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.455 [2024-07-15 21:05:03.200119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.455 [2024-07-15 21:05:03.200136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.455 [2024-07-15 21:05:03.200143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.455 [2024-07-15 21:05:03.203693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.455 [2024-07-15 21:05:03.212697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.455 [2024-07-15 21:05:03.213436] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.455 [2024-07-15 21:05:03.213472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.455 [2024-07-15 21:05:03.213484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.455 [2024-07-15 21:05:03.213729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.455 [2024-07-15 21:05:03.213952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.455 [2024-07-15 21:05:03.213960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.455 [2024-07-15 21:05:03.213968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.455 [2024-07-15 21:05:03.217536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.455 [2024-07-15 21:05:03.226546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.455 [2024-07-15 21:05:03.227233] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.455 [2024-07-15 21:05:03.227270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.455 [2024-07-15 21:05:03.227282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.455 [2024-07-15 21:05:03.227524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.455 [2024-07-15 21:05:03.227747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.455 [2024-07-15 21:05:03.227755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.455 [2024-07-15 21:05:03.227762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.455 [2024-07-15 21:05:03.231324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.455 [2024-07-15 21:05:03.240535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.455 [2024-07-15 21:05:03.241276] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.455 [2024-07-15 21:05:03.241313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.455 [2024-07-15 21:05:03.241325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.455 [2024-07-15 21:05:03.241565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.455 [2024-07-15 21:05:03.241788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.455 [2024-07-15 21:05:03.241796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.455 [2024-07-15 21:05:03.241804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.455 [2024-07-15 21:05:03.245359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.455 [2024-07-15 21:05:03.254370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.455 [2024-07-15 21:05:03.255179] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.455 [2024-07-15 21:05:03.255216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.455 [2024-07-15 21:05:03.255229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.455 [2024-07-15 21:05:03.255471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.455 [2024-07-15 21:05:03.255694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.455 [2024-07-15 21:05:03.255702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.455 [2024-07-15 21:05:03.255714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.455 [2024-07-15 21:05:03.259272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.455 [2024-07-15 21:05:03.268266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.455 [2024-07-15 21:05:03.268891] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.455 [2024-07-15 21:05:03.268909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.455 [2024-07-15 21:05:03.268917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.455 [2024-07-15 21:05:03.269143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.455 [2024-07-15 21:05:03.269362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.455 [2024-07-15 21:05:03.269370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.455 [2024-07-15 21:05:03.269377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.455 [2024-07-15 21:05:03.272923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.455 [2024-07-15 21:05:03.282127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.455 [2024-07-15 21:05:03.282749] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.455 [2024-07-15 21:05:03.282786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.455 [2024-07-15 21:05:03.282797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.455 [2024-07-15 21:05:03.283036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.455 [2024-07-15 21:05:03.283266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.455 [2024-07-15 21:05:03.283275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.455 [2024-07-15 21:05:03.283283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.455 [2024-07-15 21:05:03.286834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.455 [2024-07-15 21:05:03.295931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.455 [2024-07-15 21:05:03.296687] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.455 [2024-07-15 21:05:03.296724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.455 [2024-07-15 21:05:03.296734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.455 [2024-07-15 21:05:03.296973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.455 [2024-07-15 21:05:03.297204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.455 [2024-07-15 21:05:03.297213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.455 [2024-07-15 21:05:03.297220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.455 [2024-07-15 21:05:03.300774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.455 [2024-07-15 21:05:03.309772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.455 [2024-07-15 21:05:03.310437] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.455 [2024-07-15 21:05:03.310456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.455 [2024-07-15 21:05:03.310464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.455 [2024-07-15 21:05:03.310684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.455 [2024-07-15 21:05:03.310903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.455 [2024-07-15 21:05:03.310910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.455 [2024-07-15 21:05:03.310917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.455 [2024-07-15 21:05:03.314462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.455 [2024-07-15 21:05:03.323671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.455 [2024-07-15 21:05:03.324383] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.455 [2024-07-15 21:05:03.324420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.455 [2024-07-15 21:05:03.324431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.455 [2024-07-15 21:05:03.324670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.455 [2024-07-15 21:05:03.324893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.455 [2024-07-15 21:05:03.324901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.455 [2024-07-15 21:05:03.324909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.455 [2024-07-15 21:05:03.328470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.455 [2024-07-15 21:05:03.337470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.455 [2024-07-15 21:05:03.338228] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.455 [2024-07-15 21:05:03.338265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.455 [2024-07-15 21:05:03.338278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.455 [2024-07-15 21:05:03.338520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.455 [2024-07-15 21:05:03.338743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.455 [2024-07-15 21:05:03.338752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.455 [2024-07-15 21:05:03.338759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.456 [2024-07-15 21:05:03.342315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.717 [2024-07-15 21:05:03.351315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.717 [2024-07-15 21:05:03.351983] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.717 [2024-07-15 21:05:03.352000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.717 [2024-07-15 21:05:03.352008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.717 [2024-07-15 21:05:03.352232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.717 [2024-07-15 21:05:03.352457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.717 [2024-07-15 21:05:03.352465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.717 [2024-07-15 21:05:03.352472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.717 [2024-07-15 21:05:03.356015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.717 [2024-07-15 21:05:03.365221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.717 [2024-07-15 21:05:03.366003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.717 [2024-07-15 21:05:03.366040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.717 [2024-07-15 21:05:03.366050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.717 [2024-07-15 21:05:03.366296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.717 [2024-07-15 21:05:03.366520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.717 [2024-07-15 21:05:03.366528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.717 [2024-07-15 21:05:03.366535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.717 [2024-07-15 21:05:03.370086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.717 [2024-07-15 21:05:03.379085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.717 [2024-07-15 21:05:03.379820] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.717 [2024-07-15 21:05:03.379857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.717 [2024-07-15 21:05:03.379868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.717 [2024-07-15 21:05:03.380106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.717 [2024-07-15 21:05:03.380338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.717 [2024-07-15 21:05:03.380347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.717 [2024-07-15 21:05:03.380354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.717 [2024-07-15 21:05:03.383903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.717 [2024-07-15 21:05:03.392899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.717 [2024-07-15 21:05:03.393576] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.717 [2024-07-15 21:05:03.393613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.717 [2024-07-15 21:05:03.393625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.717 [2024-07-15 21:05:03.393865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.717 [2024-07-15 21:05:03.394088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.717 [2024-07-15 21:05:03.394096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.717 [2024-07-15 21:05:03.394104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.717 [2024-07-15 21:05:03.397670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.717 [2024-07-15 21:05:03.406880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.717 [2024-07-15 21:05:03.407613] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.717 [2024-07-15 21:05:03.407650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.717 [2024-07-15 21:05:03.407661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.717 [2024-07-15 21:05:03.407899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.717 [2024-07-15 21:05:03.408130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.717 [2024-07-15 21:05:03.408139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.717 [2024-07-15 21:05:03.408147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.717 [2024-07-15 21:05:03.411700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.717 [2024-07-15 21:05:03.420700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.717 [2024-07-15 21:05:03.421434] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.717 [2024-07-15 21:05:03.421471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.717 [2024-07-15 21:05:03.421483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.717 [2024-07-15 21:05:03.421721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.717 [2024-07-15 21:05:03.421944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.717 [2024-07-15 21:05:03.421952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.717 [2024-07-15 21:05:03.421960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.717 [2024-07-15 21:05:03.425523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.717 [2024-07-15 21:05:03.434575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.717 [2024-07-15 21:05:03.435420] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.717 [2024-07-15 21:05:03.435457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.717 [2024-07-15 21:05:03.435468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.717 [2024-07-15 21:05:03.435708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.717 [2024-07-15 21:05:03.435931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.717 [2024-07-15 21:05:03.435939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.717 [2024-07-15 21:05:03.435947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.717 [2024-07-15 21:05:03.439502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.717 [2024-07-15 21:05:03.448502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.717 [2024-07-15 21:05:03.449210] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.717 [2024-07-15 21:05:03.449247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.717 [2024-07-15 21:05:03.449262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.717 [2024-07-15 21:05:03.449501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.717 [2024-07-15 21:05:03.449724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.717 [2024-07-15 21:05:03.449732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.717 [2024-07-15 21:05:03.449740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.717 [2024-07-15 21:05:03.453309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.717 [2024-07-15 21:05:03.462303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.717 [2024-07-15 21:05:03.463073] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.718 [2024-07-15 21:05:03.463110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.718 [2024-07-15 21:05:03.463120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.718 [2024-07-15 21:05:03.463368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.718 [2024-07-15 21:05:03.463592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.718 [2024-07-15 21:05:03.463600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.718 [2024-07-15 21:05:03.463607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.718 [2024-07-15 21:05:03.467161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.718 [2024-07-15 21:05:03.476153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.718 [2024-07-15 21:05:03.476876] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.718 [2024-07-15 21:05:03.476913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.718 [2024-07-15 21:05:03.476924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.718 [2024-07-15 21:05:03.477171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.718 [2024-07-15 21:05:03.477395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.718 [2024-07-15 21:05:03.477403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.718 [2024-07-15 21:05:03.477410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.718 [2024-07-15 21:05:03.480959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.718 [2024-07-15 21:05:03.489955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.718 [2024-07-15 21:05:03.490722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.718 [2024-07-15 21:05:03.490758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.718 [2024-07-15 21:05:03.490769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.718 [2024-07-15 21:05:03.491008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.718 [2024-07-15 21:05:03.491241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.718 [2024-07-15 21:05:03.491255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.718 [2024-07-15 21:05:03.491262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.718 [2024-07-15 21:05:03.494817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.718 [2024-07-15 21:05:03.503893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.718 [2024-07-15 21:05:03.504620] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.718 [2024-07-15 21:05:03.504657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.718 [2024-07-15 21:05:03.504668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.718 [2024-07-15 21:05:03.504907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.718 [2024-07-15 21:05:03.505138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.718 [2024-07-15 21:05:03.505147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.718 [2024-07-15 21:05:03.505154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.718 [2024-07-15 21:05:03.508706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.718 [2024-07-15 21:05:03.517696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.718 [2024-07-15 21:05:03.518416] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.718 [2024-07-15 21:05:03.518453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.718 [2024-07-15 21:05:03.518463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.718 [2024-07-15 21:05:03.518702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.718 [2024-07-15 21:05:03.518925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.718 [2024-07-15 21:05:03.518933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.718 [2024-07-15 21:05:03.518941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.718 [2024-07-15 21:05:03.522502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.718 [2024-07-15 21:05:03.531500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.718 [2024-07-15 21:05:03.532087] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.718 [2024-07-15 21:05:03.532131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.718 [2024-07-15 21:05:03.532144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.718 [2024-07-15 21:05:03.532382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.718 [2024-07-15 21:05:03.532605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.718 [2024-07-15 21:05:03.532613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.718 [2024-07-15 21:05:03.532620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.718 [2024-07-15 21:05:03.536170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.718 [2024-07-15 21:05:03.545379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.718 [2024-07-15 21:05:03.546095] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.718 [2024-07-15 21:05:03.546139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.718 [2024-07-15 21:05:03.546150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.718 [2024-07-15 21:05:03.546389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.718 [2024-07-15 21:05:03.546612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.718 [2024-07-15 21:05:03.546620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.718 [2024-07-15 21:05:03.546627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.718 [2024-07-15 21:05:03.550184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.718 [2024-07-15 21:05:03.559193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.718 [2024-07-15 21:05:03.559919] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.718 [2024-07-15 21:05:03.559956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.718 [2024-07-15 21:05:03.559967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.718 [2024-07-15 21:05:03.560213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.718 [2024-07-15 21:05:03.560438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.718 [2024-07-15 21:05:03.560446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.718 [2024-07-15 21:05:03.560453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.718 [2024-07-15 21:05:03.564002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.718 [2024-07-15 21:05:03.572996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.718 [2024-07-15 21:05:03.573703] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.718 [2024-07-15 21:05:03.573740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.718 [2024-07-15 21:05:03.573751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.718 [2024-07-15 21:05:03.573990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.718 [2024-07-15 21:05:03.574222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.718 [2024-07-15 21:05:03.574231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.718 [2024-07-15 21:05:03.574239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.718 [2024-07-15 21:05:03.577787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.718 [2024-07-15 21:05:03.586783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.718 [2024-07-15 21:05:03.587489] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.718 [2024-07-15 21:05:03.587526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.718 [2024-07-15 21:05:03.587541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.718 [2024-07-15 21:05:03.587781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.719 [2024-07-15 21:05:03.588004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.719 [2024-07-15 21:05:03.588012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.719 [2024-07-15 21:05:03.588019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.719 [2024-07-15 21:05:03.591580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.719 [2024-07-15 21:05:03.600577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.719 [2024-07-15 21:05:03.601264] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.719 [2024-07-15 21:05:03.601301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.719 [2024-07-15 21:05:03.601312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.719 [2024-07-15 21:05:03.601551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.719 [2024-07-15 21:05:03.601774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.719 [2024-07-15 21:05:03.601782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.719 [2024-07-15 21:05:03.601789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.719 [2024-07-15 21:05:03.605351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.980 [2024-07-15 21:05:03.614560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.980 [2024-07-15 21:05:03.615239] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.980 [2024-07-15 21:05:03.615276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.980 [2024-07-15 21:05:03.615287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.980 [2024-07-15 21:05:03.615526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.980 [2024-07-15 21:05:03.615750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.980 [2024-07-15 21:05:03.615758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.980 [2024-07-15 21:05:03.615766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.980 [2024-07-15 21:05:03.619326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.980 [2024-07-15 21:05:03.628535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.980 [2024-07-15 21:05:03.629225] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.980 [2024-07-15 21:05:03.629262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.980 [2024-07-15 21:05:03.629272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.980 [2024-07-15 21:05:03.629511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.980 [2024-07-15 21:05:03.629735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.980 [2024-07-15 21:05:03.629748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.980 [2024-07-15 21:05:03.629755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.980 [2024-07-15 21:05:03.633321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.980 [2024-07-15 21:05:03.642338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.980 [2024-07-15 21:05:03.643101] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.980 [2024-07-15 21:05:03.643144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.980 [2024-07-15 21:05:03.643157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.980 [2024-07-15 21:05:03.643397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.980 [2024-07-15 21:05:03.643620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.981 [2024-07-15 21:05:03.643628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.981 [2024-07-15 21:05:03.643635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.981 [2024-07-15 21:05:03.647191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.981 [2024-07-15 21:05:03.656200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.981 [2024-07-15 21:05:03.656954] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.981 [2024-07-15 21:05:03.656990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.981 [2024-07-15 21:05:03.657001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.981 [2024-07-15 21:05:03.657249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.981 [2024-07-15 21:05:03.657473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.981 [2024-07-15 21:05:03.657481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.981 [2024-07-15 21:05:03.657488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.981 [2024-07-15 21:05:03.661036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.981 [2024-07-15 21:05:03.670026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.981 [2024-07-15 21:05:03.670668] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.981 [2024-07-15 21:05:03.670705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.981 [2024-07-15 21:05:03.670716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.981 [2024-07-15 21:05:03.670955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.981 [2024-07-15 21:05:03.671188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.981 [2024-07-15 21:05:03.671197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.981 [2024-07-15 21:05:03.671204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.981 [2024-07-15 21:05:03.674759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.981 [2024-07-15 21:05:03.683968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.981 [2024-07-15 21:05:03.684725] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.981 [2024-07-15 21:05:03.684762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.981 [2024-07-15 21:05:03.684773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.981 [2024-07-15 21:05:03.685012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.981 [2024-07-15 21:05:03.685244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.981 [2024-07-15 21:05:03.685253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.981 [2024-07-15 21:05:03.685260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.981 [2024-07-15 21:05:03.688813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.981 [2024-07-15 21:05:03.697815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.981 [2024-07-15 21:05:03.698546] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.981 [2024-07-15 21:05:03.698583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.981 [2024-07-15 21:05:03.698593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.981 [2024-07-15 21:05:03.698832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.981 [2024-07-15 21:05:03.699055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.981 [2024-07-15 21:05:03.699064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.981 [2024-07-15 21:05:03.699071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.981 [2024-07-15 21:05:03.702633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.981 [2024-07-15 21:05:03.711629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.981 [2024-07-15 21:05:03.712379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.981 [2024-07-15 21:05:03.712415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.981 [2024-07-15 21:05:03.712426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.981 [2024-07-15 21:05:03.712665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.981 [2024-07-15 21:05:03.712892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.981 [2024-07-15 21:05:03.712905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.981 [2024-07-15 21:05:03.712913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.981 [2024-07-15 21:05:03.716471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.981 [2024-07-15 21:05:03.725467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.981 [2024-07-15 21:05:03.726224] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.981 [2024-07-15 21:05:03.726262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.981 [2024-07-15 21:05:03.726273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.981 [2024-07-15 21:05:03.726516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.981 [2024-07-15 21:05:03.726739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.981 [2024-07-15 21:05:03.726747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.981 [2024-07-15 21:05:03.726754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.981 [2024-07-15 21:05:03.730313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.981 [2024-07-15 21:05:03.739317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.981 [2024-07-15 21:05:03.740087] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.981 [2024-07-15 21:05:03.740131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.981 [2024-07-15 21:05:03.740142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.981 [2024-07-15 21:05:03.740381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.981 [2024-07-15 21:05:03.740604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.981 [2024-07-15 21:05:03.740613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.981 [2024-07-15 21:05:03.740620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.981 [2024-07-15 21:05:03.744179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.981 [2024-07-15 21:05:03.753190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.981 [2024-07-15 21:05:03.753956] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.981 [2024-07-15 21:05:03.753994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.981 [2024-07-15 21:05:03.754004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.981 [2024-07-15 21:05:03.754250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.981 [2024-07-15 21:05:03.754474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.981 [2024-07-15 21:05:03.754482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.981 [2024-07-15 21:05:03.754489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.981 [2024-07-15 21:05:03.758035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.981 [2024-07-15 21:05:03.767030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.981 [2024-07-15 21:05:03.767693] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.981 [2024-07-15 21:05:03.767730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.981 [2024-07-15 21:05:03.767741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.981 [2024-07-15 21:05:03.767980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.981 [2024-07-15 21:05:03.768211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.981 [2024-07-15 21:05:03.768220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.981 [2024-07-15 21:05:03.768236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.981 [2024-07-15 21:05:03.771784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.981 [2024-07-15 21:05:03.780989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.981 [2024-07-15 21:05:03.781753] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.981 [2024-07-15 21:05:03.781790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.981 [2024-07-15 21:05:03.781800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.981 [2024-07-15 21:05:03.782039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.981 [2024-07-15 21:05:03.782271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.981 [2024-07-15 21:05:03.782280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.981 [2024-07-15 21:05:03.782287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.981 [2024-07-15 21:05:03.785835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.981 [2024-07-15 21:05:03.794829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.981 [2024-07-15 21:05:03.795565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.981 [2024-07-15 21:05:03.795602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.981 [2024-07-15 21:05:03.795613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.981 [2024-07-15 21:05:03.795852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.981 [2024-07-15 21:05:03.796075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.981 [2024-07-15 21:05:03.796083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.981 [2024-07-15 21:05:03.796091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.981 [2024-07-15 21:05:03.799647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.981 [2024-07-15 21:05:03.808644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.981 [2024-07-15 21:05:03.809407] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.982 [2024-07-15 21:05:03.809444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.982 [2024-07-15 21:05:03.809455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.982 [2024-07-15 21:05:03.809694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.982 [2024-07-15 21:05:03.809917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.982 [2024-07-15 21:05:03.809925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.982 [2024-07-15 21:05:03.809932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.982 [2024-07-15 21:05:03.813491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.982 [2024-07-15 21:05:03.822485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.982 [2024-07-15 21:05:03.823224] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.982 [2024-07-15 21:05:03.823265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.982 [2024-07-15 21:05:03.823278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.982 [2024-07-15 21:05:03.823518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.982 [2024-07-15 21:05:03.823741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.982 [2024-07-15 21:05:03.823749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.982 [2024-07-15 21:05:03.823756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.982 [2024-07-15 21:05:03.827316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.982 [2024-07-15 21:05:03.836301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.982 [2024-07-15 21:05:03.837059] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.982 [2024-07-15 21:05:03.837096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.982 [2024-07-15 21:05:03.837106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.982 [2024-07-15 21:05:03.837353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.982 [2024-07-15 21:05:03.837576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.982 [2024-07-15 21:05:03.837585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.982 [2024-07-15 21:05:03.837592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.982 [2024-07-15 21:05:03.841149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.982 [2024-07-15 21:05:03.850163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.982 [2024-07-15 21:05:03.850879] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.982 [2024-07-15 21:05:03.850916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.982 [2024-07-15 21:05:03.850927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.982 [2024-07-15 21:05:03.851182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.982 [2024-07-15 21:05:03.851406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.982 [2024-07-15 21:05:03.851414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.982 [2024-07-15 21:05:03.851421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.982 [2024-07-15 21:05:03.854974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.982 [2024-07-15 21:05:03.863972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.982 [2024-07-15 21:05:03.864737] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.982 [2024-07-15 21:05:03.864774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:28:59.982 [2024-07-15 21:05:03.864785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:28:59.982 [2024-07-15 21:05:03.865024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:28:59.982 [2024-07-15 21:05:03.865260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.982 [2024-07-15 21:05:03.865270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.982 [2024-07-15 21:05:03.865277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.982 [2024-07-15 21:05:03.868835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.244 [2024-07-15 21:05:03.877846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.244 [2024-07-15 21:05:03.878475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.244 [2024-07-15 21:05:03.878512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.244 [2024-07-15 21:05:03.878523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.244 [2024-07-15 21:05:03.878762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.244 [2024-07-15 21:05:03.878985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.244 [2024-07-15 21:05:03.878993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.244 [2024-07-15 21:05:03.879000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.244 [2024-07-15 21:05:03.882564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.244 [2024-07-15 21:05:03.891781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.244 [2024-07-15 21:05:03.892433] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.244 [2024-07-15 21:05:03.892452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.244 [2024-07-15 21:05:03.892460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.244 [2024-07-15 21:05:03.892679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.244 [2024-07-15 21:05:03.892898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.244 [2024-07-15 21:05:03.892906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.244 [2024-07-15 21:05:03.892912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.244 [2024-07-15 21:05:03.896461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.244 [2024-07-15 21:05:03.905681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.244 [2024-07-15 21:05:03.906412] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.244 [2024-07-15 21:05:03.906448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.244 [2024-07-15 21:05:03.906460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.244 [2024-07-15 21:05:03.906698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.244 [2024-07-15 21:05:03.906921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.244 [2024-07-15 21:05:03.906929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.244 [2024-07-15 21:05:03.906937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.244 [2024-07-15 21:05:03.910501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.244 [2024-07-15 21:05:03.919498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.244 [2024-07-15 21:05:03.920201] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.244 [2024-07-15 21:05:03.920239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.244 [2024-07-15 21:05:03.920249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.244 [2024-07-15 21:05:03.920488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.244 [2024-07-15 21:05:03.920711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.244 [2024-07-15 21:05:03.920720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.244 [2024-07-15 21:05:03.920727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.244 [2024-07-15 21:05:03.924284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.244 [2024-07-15 21:05:03.933491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.244 [2024-07-15 21:05:03.934242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.244 [2024-07-15 21:05:03.934279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.244 [2024-07-15 21:05:03.934290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.244 [2024-07-15 21:05:03.934529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.245 [2024-07-15 21:05:03.934752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.245 [2024-07-15 21:05:03.934760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.245 [2024-07-15 21:05:03.934767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.245 [2024-07-15 21:05:03.938328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.245 [2024-07-15 21:05:03.947334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.245 [2024-07-15 21:05:03.948079] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.245 [2024-07-15 21:05:03.948116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.245 [2024-07-15 21:05:03.948135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.245 [2024-07-15 21:05:03.948374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.245 [2024-07-15 21:05:03.948597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.245 [2024-07-15 21:05:03.948606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.245 [2024-07-15 21:05:03.948613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.245 [2024-07-15 21:05:03.952182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.245 [2024-07-15 21:05:03.961185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.245 [2024-07-15 21:05:03.961825] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.245 [2024-07-15 21:05:03.961862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.245 [2024-07-15 21:05:03.961878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.245 [2024-07-15 21:05:03.962117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.245 [2024-07-15 21:05:03.962350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.245 [2024-07-15 21:05:03.962359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.245 [2024-07-15 21:05:03.962366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.245 [2024-07-15 21:05:03.965915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.245 [2024-07-15 21:05:03.975137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.245 [2024-07-15 21:05:03.975798] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.245 [2024-07-15 21:05:03.975835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.245 [2024-07-15 21:05:03.975846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.245 [2024-07-15 21:05:03.976084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.245 [2024-07-15 21:05:03.976316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.245 [2024-07-15 21:05:03.976325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.245 [2024-07-15 21:05:03.976332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.245 [2024-07-15 21:05:03.979887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.245 [2024-07-15 21:05:03.989102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.245 [2024-07-15 21:05:03.989849] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.245 [2024-07-15 21:05:03.989886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.245 [2024-07-15 21:05:03.989896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.245 [2024-07-15 21:05:03.990145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.245 [2024-07-15 21:05:03.990368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.245 [2024-07-15 21:05:03.990377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.245 [2024-07-15 21:05:03.990384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.245 [2024-07-15 21:05:03.993935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.245 [2024-07-15 21:05:04.002950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.245 [2024-07-15 21:05:04.003720] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.245 [2024-07-15 21:05:04.003758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.245 [2024-07-15 21:05:04.003768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.245 [2024-07-15 21:05:04.004007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.245 [2024-07-15 21:05:04.004238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.245 [2024-07-15 21:05:04.004252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.245 [2024-07-15 21:05:04.004259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.245 [2024-07-15 21:05:04.007813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.245 [2024-07-15 21:05:04.016817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.245 [2024-07-15 21:05:04.017518] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.245 [2024-07-15 21:05:04.017555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.245 [2024-07-15 21:05:04.017565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.245 [2024-07-15 21:05:04.017804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.245 [2024-07-15 21:05:04.018027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.245 [2024-07-15 21:05:04.018035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.245 [2024-07-15 21:05:04.018043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.245 [2024-07-15 21:05:04.021599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.245 [2024-07-15 21:05:04.030796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.245 [2024-07-15 21:05:04.031528] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.245 [2024-07-15 21:05:04.031566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.245 [2024-07-15 21:05:04.031577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.245 [2024-07-15 21:05:04.031816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.245 [2024-07-15 21:05:04.032039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.245 [2024-07-15 21:05:04.032047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.245 [2024-07-15 21:05:04.032054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.245 [2024-07-15 21:05:04.035800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.245 [2024-07-15 21:05:04.044600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.245 [2024-07-15 21:05:04.045359] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.245 [2024-07-15 21:05:04.045397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.245 [2024-07-15 21:05:04.045407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.245 [2024-07-15 21:05:04.045646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.245 [2024-07-15 21:05:04.045869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.245 [2024-07-15 21:05:04.045877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.245 [2024-07-15 21:05:04.045885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.245 [2024-07-15 21:05:04.049444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.245 [2024-07-15 21:05:04.058459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.245 [2024-07-15 21:05:04.059132] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.245 [2024-07-15 21:05:04.059169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.245 [2024-07-15 21:05:04.059181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.245 [2024-07-15 21:05:04.059424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.246 [2024-07-15 21:05:04.059646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.246 [2024-07-15 21:05:04.059654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.246 [2024-07-15 21:05:04.059662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.246 [2024-07-15 21:05:04.063218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.246 [2024-07-15 21:05:04.072420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.246 [2024-07-15 21:05:04.073148] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.246 [2024-07-15 21:05:04.073185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.246 [2024-07-15 21:05:04.073195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.246 [2024-07-15 21:05:04.073434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.246 [2024-07-15 21:05:04.073657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.246 [2024-07-15 21:05:04.073665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.246 [2024-07-15 21:05:04.073673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.246 [2024-07-15 21:05:04.077234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.246 [2024-07-15 21:05:04.086228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.246 [2024-07-15 21:05:04.086993] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.246 [2024-07-15 21:05:04.087029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.246 [2024-07-15 21:05:04.087040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.246 [2024-07-15 21:05:04.087287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.246 [2024-07-15 21:05:04.087512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.246 [2024-07-15 21:05:04.087520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.246 [2024-07-15 21:05:04.087527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.246 [2024-07-15 21:05:04.091076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.246 [2024-07-15 21:05:04.100071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.246 [2024-07-15 21:05:04.100796] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.246 [2024-07-15 21:05:04.100833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.246 [2024-07-15 21:05:04.100844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.246 [2024-07-15 21:05:04.101088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.246 [2024-07-15 21:05:04.101322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.246 [2024-07-15 21:05:04.101332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.246 [2024-07-15 21:05:04.101339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.246 [2024-07-15 21:05:04.104893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.246 [2024-07-15 21:05:04.113897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.246 [2024-07-15 21:05:04.114642] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.246 [2024-07-15 21:05:04.114680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.246 [2024-07-15 21:05:04.114690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.246 [2024-07-15 21:05:04.114929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.246 [2024-07-15 21:05:04.115161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.246 [2024-07-15 21:05:04.115171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.246 [2024-07-15 21:05:04.115178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.246 [2024-07-15 21:05:04.118730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.246 [2024-07-15 21:05:04.127716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.246 [2024-07-15 21:05:04.128493] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.246 [2024-07-15 21:05:04.128530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.246 [2024-07-15 21:05:04.128540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.246 [2024-07-15 21:05:04.128779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.246 [2024-07-15 21:05:04.129002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.246 [2024-07-15 21:05:04.129010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.246 [2024-07-15 21:05:04.129018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.246 [2024-07-15 21:05:04.132582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.508 [2024-07-15 21:05:04.141592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.508 [2024-07-15 21:05:04.142270] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.508 [2024-07-15 21:05:04.142289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.508 [2024-07-15 21:05:04.142297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.508 [2024-07-15 21:05:04.142516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.508 [2024-07-15 21:05:04.142735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.508 [2024-07-15 21:05:04.142743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.508 [2024-07-15 21:05:04.142755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.508 [2024-07-15 21:05:04.146307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.508 [2024-07-15 21:05:04.155616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.508 [2024-07-15 21:05:04.156383] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.508 [2024-07-15 21:05:04.156419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.508 [2024-07-15 21:05:04.156430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.508 [2024-07-15 21:05:04.156669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.508 [2024-07-15 21:05:04.156893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.508 [2024-07-15 21:05:04.156901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.508 [2024-07-15 21:05:04.156908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.508 [2024-07-15 21:05:04.160464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.508 [2024-07-15 21:05:04.169464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.508 [2024-07-15 21:05:04.170235] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.508 [2024-07-15 21:05:04.170272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.508 [2024-07-15 21:05:04.170282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.508 [2024-07-15 21:05:04.170521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.508 [2024-07-15 21:05:04.170744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.508 [2024-07-15 21:05:04.170752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.508 [2024-07-15 21:05:04.170759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.508 [2024-07-15 21:05:04.174315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.509 [2024-07-15 21:05:04.183312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.509 [2024-07-15 21:05:04.183963] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.509 [2024-07-15 21:05:04.184000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.509 [2024-07-15 21:05:04.184011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.509 [2024-07-15 21:05:04.184258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.509 [2024-07-15 21:05:04.184482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.509 [2024-07-15 21:05:04.184490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.509 [2024-07-15 21:05:04.184498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.509 [2024-07-15 21:05:04.188048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.509 [2024-07-15 21:05:04.197248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.509 [2024-07-15 21:05:04.197916] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.509 [2024-07-15 21:05:04.197953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.509 [2024-07-15 21:05:04.197964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.509 [2024-07-15 21:05:04.198212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.509 [2024-07-15 21:05:04.198435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.509 [2024-07-15 21:05:04.198443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.509 [2024-07-15 21:05:04.198451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.509 [2024-07-15 21:05:04.202003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1764137 Killed "${NVMF_APP[@]}" "$@" 00:29:00.509 21:05:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:00.509 21:05:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:00.509 21:05:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:00.509 21:05:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:00.509 21:05:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.509 [2024-07-15 21:05:04.211216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.509 [2024-07-15 21:05:04.211934] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.509 [2024-07-15 21:05:04.211971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.509 [2024-07-15 21:05:04.211982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.509 [2024-07-15 21:05:04.212228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.509 [2024-07-15 21:05:04.212451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.509 [2024-07-15 21:05:04.212460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.509 [2024-07-15 21:05:04.212467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.509 21:05:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1765841 00:29:00.509 21:05:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1765841 00:29:00.509 21:05:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:00.509 21:05:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1765841 ']' 00:29:00.509 [2024-07-15 21:05:04.216016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.509 21:05:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.509 21:05:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:00.509 21:05:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:00.509 21:05:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:00.509 21:05:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.509 [2024-07-15 21:05:04.225024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.509 [2024-07-15 21:05:04.225760] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.509 [2024-07-15 21:05:04.225803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.509 [2024-07-15 21:05:04.225814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.509 [2024-07-15 21:05:04.226055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.509 [2024-07-15 21:05:04.226287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.509 [2024-07-15 21:05:04.226297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.509 [2024-07-15 21:05:04.226304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.509 [2024-07-15 21:05:04.229857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.509 [2024-07-15 21:05:04.238880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.509 [2024-07-15 21:05:04.239554] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.509 [2024-07-15 21:05:04.239572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.509 [2024-07-15 21:05:04.239580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.509 [2024-07-15 21:05:04.239800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.509 [2024-07-15 21:05:04.240019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.509 [2024-07-15 21:05:04.240027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.509 [2024-07-15 21:05:04.240034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.509 [2024-07-15 21:05:04.243609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.509 [2024-07-15 21:05:04.252873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.509 [2024-07-15 21:05:04.253509] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.509 [2024-07-15 21:05:04.253525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.509 [2024-07-15 21:05:04.253532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.509 [2024-07-15 21:05:04.253751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.509 [2024-07-15 21:05:04.253970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.509 [2024-07-15 21:05:04.253977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.509 [2024-07-15 21:05:04.253984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.509 [2024-07-15 21:05:04.257544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.509 [2024-07-15 21:05:04.266758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.509 [2024-07-15 21:05:04.267483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.509 [2024-07-15 21:05:04.267520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.509 [2024-07-15 21:05:04.267532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.509 [2024-07-15 21:05:04.267776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.509 [2024-07-15 21:05:04.267974] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:29:00.509 [2024-07-15 21:05:04.268004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.509 [2024-07-15 21:05:04.268015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.509 [2024-07-15 21:05:04.268023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.509 [2024-07-15 21:05:04.268026] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.509 [2024-07-15 21:05:04.271581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.509 [2024-07-15 21:05:04.280608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.509 [2024-07-15 21:05:04.281398] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.510 [2024-07-15 21:05:04.281436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.510 [2024-07-15 21:05:04.281449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.510 [2024-07-15 21:05:04.281692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.510 [2024-07-15 21:05:04.281915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.510 [2024-07-15 21:05:04.281923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.510 [2024-07-15 21:05:04.281931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.510 [2024-07-15 21:05:04.285487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.510 [2024-07-15 21:05:04.294486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.510 [2024-07-15 21:05:04.295204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.510 [2024-07-15 21:05:04.295242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.510 [2024-07-15 21:05:04.295254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.510 [2024-07-15 21:05:04.295497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.510 [2024-07-15 21:05:04.295721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.510 [2024-07-15 21:05:04.295729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.510 [2024-07-15 21:05:04.295737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.510 [2024-07-15 21:05:04.299299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.510 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.510 [2024-07-15 21:05:04.308305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.510 [2024-07-15 21:05:04.308787] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.510 [2024-07-15 21:05:04.308805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.510 [2024-07-15 21:05:04.308813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.510 [2024-07-15 21:05:04.309033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.510 [2024-07-15 21:05:04.309262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.510 [2024-07-15 21:05:04.309270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.510 [2024-07-15 21:05:04.309277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.510 [2024-07-15 21:05:04.312822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.510 [2024-07-15 21:05:04.322232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.510 [2024-07-15 21:05:04.322968] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.510 [2024-07-15 21:05:04.323004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.510 [2024-07-15 21:05:04.323015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.510 [2024-07-15 21:05:04.323262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.510 [2024-07-15 21:05:04.323486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.510 [2024-07-15 21:05:04.323494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.510 [2024-07-15 21:05:04.323502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.510 [2024-07-15 21:05:04.327143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.510 [2024-07-15 21:05:04.336153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.510 [2024-07-15 21:05:04.336923] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.510 [2024-07-15 21:05:04.336961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.510 [2024-07-15 21:05:04.336972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.510 [2024-07-15 21:05:04.337218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.510 [2024-07-15 21:05:04.337442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.510 [2024-07-15 21:05:04.337451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.510 [2024-07-15 21:05:04.337458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.510 [2024-07-15 21:05:04.341010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.510 [2024-07-15 21:05:04.350007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.510 [2024-07-15 21:05:04.350671] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.510 [2024-07-15 21:05:04.350689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.510 [2024-07-15 21:05:04.350697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.510 [2024-07-15 21:05:04.350917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.510 [2024-07-15 21:05:04.351141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.510 [2024-07-15 21:05:04.351149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.510 [2024-07-15 21:05:04.351156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.510 [2024-07-15 21:05:04.352425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:00.510 [2024-07-15 21:05:04.354716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.510 [2024-07-15 21:05:04.363918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.510 [2024-07-15 21:05:04.364472] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.510 [2024-07-15 21:05:04.364511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.510 [2024-07-15 21:05:04.364525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.510 [2024-07-15 21:05:04.364770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.510 [2024-07-15 21:05:04.364994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.510 [2024-07-15 21:05:04.365003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.510 [2024-07-15 21:05:04.365010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.510 [2024-07-15 21:05:04.368569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.510 [2024-07-15 21:05:04.377773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.510 [2024-07-15 21:05:04.378517] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.510 [2024-07-15 21:05:04.378555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.510 [2024-07-15 21:05:04.378566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.510 [2024-07-15 21:05:04.378806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.510 [2024-07-15 21:05:04.379029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.510 [2024-07-15 21:05:04.379038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.510 [2024-07-15 21:05:04.379046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.510 [2024-07-15 21:05:04.382611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.510 [2024-07-15 21:05:04.391614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.510 [2024-07-15 21:05:04.392246] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.510 [2024-07-15 21:05:04.392284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.510 [2024-07-15 21:05:04.392297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.510 [2024-07-15 21:05:04.392538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.510 [2024-07-15 21:05:04.392761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.510 [2024-07-15 21:05:04.392770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.510 [2024-07-15 21:05:04.392777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.510 [2024-07-15 21:05:04.396335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.771 [2024-07-15 21:05:04.405543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.771 [2024-07-15 21:05:04.405858] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.771 [2024-07-15 21:05:04.405886] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.771 [2024-07-15 21:05:04.405893] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.771 [2024-07-15 21:05:04.405899] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.771 [2024-07-15 21:05:04.405903] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.771 [2024-07-15 21:05:04.406126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.771 [2024-07-15 21:05:04.406327] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.771 [2024-07-15 21:05:04.406250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.771 [2024-07-15 21:05:04.406363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.771 [2024-07-15 21:05:04.406375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.771 [2024-07-15 21:05:04.406369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.771 [2024-07-15 21:05:04.406618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.771 [2024-07-15 21:05:04.406841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.771 [2024-07-15 21:05:04.406850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.771 [2024-07-15 21:05:04.406858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:00.771 [2024-07-15 21:05:04.410422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.771 [2024-07-15 21:05:04.419433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.771 [2024-07-15 21:05:04.420229] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.771 [2024-07-15 21:05:04.420268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.771 [2024-07-15 21:05:04.420281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.771 [2024-07-15 21:05:04.420525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.771 [2024-07-15 21:05:04.420748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.771 [2024-07-15 21:05:04.420757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.771 [2024-07-15 21:05:04.420765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.771 [2024-07-15 21:05:04.424321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.771 [2024-07-15 21:05:04.433323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.772 [2024-07-15 21:05:04.434030] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.772 [2024-07-15 21:05:04.434049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.772 [2024-07-15 21:05:04.434057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.772 [2024-07-15 21:05:04.434282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.772 [2024-07-15 21:05:04.434502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.772 [2024-07-15 21:05:04.434511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.772 [2024-07-15 21:05:04.434523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.772 [2024-07-15 21:05:04.438070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.772 [2024-07-15 21:05:04.447281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.772 [2024-07-15 21:05:04.448078] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.772 [2024-07-15 21:05:04.448116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.772 [2024-07-15 21:05:04.448136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.772 [2024-07-15 21:05:04.448380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.772 [2024-07-15 21:05:04.448603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.772 [2024-07-15 21:05:04.448612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.772 [2024-07-15 21:05:04.448620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.772 [2024-07-15 21:05:04.452182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.772 [2024-07-15 21:05:04.461280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.772 [2024-07-15 21:05:04.461930] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.772 [2024-07-15 21:05:04.461948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.772 [2024-07-15 21:05:04.461956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.772 [2024-07-15 21:05:04.462182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.772 [2024-07-15 21:05:04.462402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.772 [2024-07-15 21:05:04.462410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.772 [2024-07-15 21:05:04.462417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.772 [2024-07-15 21:05:04.465963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.772 [2024-07-15 21:05:04.475167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.772 [2024-07-15 21:05:04.475911] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.772 [2024-07-15 21:05:04.475948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.772 [2024-07-15 21:05:04.475959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.772 [2024-07-15 21:05:04.476206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.772 [2024-07-15 21:05:04.476431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.772 [2024-07-15 21:05:04.476439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.772 [2024-07-15 21:05:04.476447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.772 [2024-07-15 21:05:04.479999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.772 [2024-07-15 21:05:04.489011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.772 [2024-07-15 21:05:04.489802] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.772 [2024-07-15 21:05:04.489848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.772 [2024-07-15 21:05:04.489859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.772 [2024-07-15 21:05:04.490098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.772 [2024-07-15 21:05:04.490331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.772 [2024-07-15 21:05:04.490340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.772 [2024-07-15 21:05:04.490348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.772 [2024-07-15 21:05:04.493893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.772 [2024-07-15 21:05:04.502898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.772 [2024-07-15 21:05:04.503650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.772 [2024-07-15 21:05:04.503687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.772 [2024-07-15 21:05:04.503698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.772 [2024-07-15 21:05:04.503937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.772 [2024-07-15 21:05:04.504168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.772 [2024-07-15 21:05:04.504178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.772 [2024-07-15 21:05:04.504186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.772 [2024-07-15 21:05:04.507794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.772 [2024-07-15 21:05:04.516808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.772 [2024-07-15 21:05:04.517484] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.772 [2024-07-15 21:05:04.517503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.772 [2024-07-15 21:05:04.517511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.772 [2024-07-15 21:05:04.517731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.772 [2024-07-15 21:05:04.517950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.772 [2024-07-15 21:05:04.517958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.772 [2024-07-15 21:05:04.517965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.772 [2024-07-15 21:05:04.521515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.772 [2024-07-15 21:05:04.530718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.772 [2024-07-15 21:05:04.531392] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.772 [2024-07-15 21:05:04.531408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.772 [2024-07-15 21:05:04.531416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.772 [2024-07-15 21:05:04.531635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.772 [2024-07-15 21:05:04.531859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.772 [2024-07-15 21:05:04.531867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.772 [2024-07-15 21:05:04.531874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.772 [2024-07-15 21:05:04.535418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.772 [2024-07-15 21:05:04.544618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.772 [2024-07-15 21:05:04.545396] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.772 [2024-07-15 21:05:04.545432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.772 [2024-07-15 21:05:04.545443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.772 [2024-07-15 21:05:04.545682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.772 [2024-07-15 21:05:04.545906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.772 [2024-07-15 21:05:04.545914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.772 [2024-07-15 21:05:04.545922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.772 [2024-07-15 21:05:04.549476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.773 [2024-07-15 21:05:04.558494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.773 [2024-07-15 21:05:04.559183] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.773 [2024-07-15 21:05:04.559202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.773 [2024-07-15 21:05:04.559210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.773 [2024-07-15 21:05:04.559430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.773 [2024-07-15 21:05:04.559650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.773 [2024-07-15 21:05:04.559657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.773 [2024-07-15 21:05:04.559664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.773 [2024-07-15 21:05:04.563211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.773 [2024-07-15 21:05:04.572420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.773 [2024-07-15 21:05:04.573105] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.773 [2024-07-15 21:05:04.573120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.773 [2024-07-15 21:05:04.573133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.773 [2024-07-15 21:05:04.573352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.773 [2024-07-15 21:05:04.573571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.773 [2024-07-15 21:05:04.573578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.773 [2024-07-15 21:05:04.573585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.773 [2024-07-15 21:05:04.577133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.773 [2024-07-15 21:05:04.586348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.773 [2024-07-15 21:05:04.587023] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.773 [2024-07-15 21:05:04.587038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.773 [2024-07-15 21:05:04.587046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.773 [2024-07-15 21:05:04.587269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.773 [2024-07-15 21:05:04.587488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.773 [2024-07-15 21:05:04.587496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.773 [2024-07-15 21:05:04.587502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.773 [2024-07-15 21:05:04.591041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.773 [2024-07-15 21:05:04.600249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.773 [2024-07-15 21:05:04.600885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.773 [2024-07-15 21:05:04.600899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.773 [2024-07-15 21:05:04.600906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.773 [2024-07-15 21:05:04.601130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.773 [2024-07-15 21:05:04.601350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.773 [2024-07-15 21:05:04.601358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.773 [2024-07-15 21:05:04.601364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.773 [2024-07-15 21:05:04.604905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.773 [2024-07-15 21:05:04.614108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.773 [2024-07-15 21:05:04.614882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.773 [2024-07-15 21:05:04.614920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.773 [2024-07-15 21:05:04.614931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.773 [2024-07-15 21:05:04.615177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.773 [2024-07-15 21:05:04.615401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.773 [2024-07-15 21:05:04.615410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.773 [2024-07-15 21:05:04.615417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.773 [2024-07-15 21:05:04.618970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.773 [2024-07-15 21:05:04.627974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.773 [2024-07-15 21:05:04.628732] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.773 [2024-07-15 21:05:04.628769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.773 [2024-07-15 21:05:04.628785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.773 [2024-07-15 21:05:04.629024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.773 [2024-07-15 21:05:04.629254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.773 [2024-07-15 21:05:04.629263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.773 [2024-07-15 21:05:04.629270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.773 [2024-07-15 21:05:04.632821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.773 [2024-07-15 21:05:04.641811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.773 [2024-07-15 21:05:04.642464] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.773 [2024-07-15 21:05:04.642483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.773 [2024-07-15 21:05:04.642490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.773 [2024-07-15 21:05:04.642710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.773 [2024-07-15 21:05:04.642929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.773 [2024-07-15 21:05:04.642936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.773 [2024-07-15 21:05:04.642943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.773 [2024-07-15 21:05:04.646497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.773 [2024-07-15 21:05:04.655717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.773 [2024-07-15 21:05:04.656343] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.773 [2024-07-15 21:05:04.656360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:00.773 [2024-07-15 21:05:04.656367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:00.773 [2024-07-15 21:05:04.656586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:00.773 [2024-07-15 21:05:04.656809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.773 [2024-07-15 21:05:04.656816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.773 [2024-07-15 21:05:04.656823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.773 [2024-07-15 21:05:04.660373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.034 [2024-07-15 21:05:04.669569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.034 [2024-07-15 21:05:04.670244] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.034 [2024-07-15 21:05:04.670260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.034 [2024-07-15 21:05:04.670267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.035 [2024-07-15 21:05:04.670486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.035 [2024-07-15 21:05:04.670704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.035 [2024-07-15 21:05:04.670717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.035 [2024-07-15 21:05:04.670724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.035 [2024-07-15 21:05:04.674274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.035 [2024-07-15 21:05:04.683479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.035 [2024-07-15 21:05:04.684155] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.035 [2024-07-15 21:05:04.684170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.035 [2024-07-15 21:05:04.684177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.035 [2024-07-15 21:05:04.684396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.035 [2024-07-15 21:05:04.684614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.035 [2024-07-15 21:05:04.684621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.035 [2024-07-15 21:05:04.684628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.035 [2024-07-15 21:05:04.688176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.035 [2024-07-15 21:05:04.697377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.035 [2024-07-15 21:05:04.697920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.035 [2024-07-15 21:05:04.697934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.035 [2024-07-15 21:05:04.697942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.035 [2024-07-15 21:05:04.698164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.035 [2024-07-15 21:05:04.698383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.035 [2024-07-15 21:05:04.698391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.035 [2024-07-15 21:05:04.698398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.035 [2024-07-15 21:05:04.701942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.035 [2024-07-15 21:05:04.711359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.035 [2024-07-15 21:05:04.711993] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.035 [2024-07-15 21:05:04.712008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.035 [2024-07-15 21:05:04.712015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.035 [2024-07-15 21:05:04.712238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.035 [2024-07-15 21:05:04.712458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.035 [2024-07-15 21:05:04.712465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.035 [2024-07-15 21:05:04.712472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.035 [2024-07-15 21:05:04.716018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.035 [2024-07-15 21:05:04.725236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.035 [2024-07-15 21:05:04.725886] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.035 [2024-07-15 21:05:04.725901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.035 [2024-07-15 21:05:04.725908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.035 [2024-07-15 21:05:04.726130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.035 [2024-07-15 21:05:04.726350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.035 [2024-07-15 21:05:04.726357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.035 [2024-07-15 21:05:04.726364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.035 [2024-07-15 21:05:04.729908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.035 [2024-07-15 21:05:04.739114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.035 [2024-07-15 21:05:04.739838] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.035 [2024-07-15 21:05:04.739876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.035 [2024-07-15 21:05:04.739886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.035 [2024-07-15 21:05:04.740134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.035 [2024-07-15 21:05:04.740357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.035 [2024-07-15 21:05:04.740366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.035 [2024-07-15 21:05:04.740373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.035 [2024-07-15 21:05:04.743925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.035 [2024-07-15 21:05:04.752938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.035 [2024-07-15 21:05:04.753633] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.035 [2024-07-15 21:05:04.753651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.035 [2024-07-15 21:05:04.753659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.035 [2024-07-15 21:05:04.753878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.035 [2024-07-15 21:05:04.754097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.035 [2024-07-15 21:05:04.754104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.035 [2024-07-15 21:05:04.754111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.035 [2024-07-15 21:05:04.757660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.035 [2024-07-15 21:05:04.766864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.035 [2024-07-15 21:05:04.767523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.035 [2024-07-15 21:05:04.767539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.035 [2024-07-15 21:05:04.767546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.035 [2024-07-15 21:05:04.767770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.035 [2024-07-15 21:05:04.767989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.035 [2024-07-15 21:05:04.767996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.035 [2024-07-15 21:05:04.768003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.035 [2024-07-15 21:05:04.771547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.035 [2024-07-15 21:05:04.780752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.035 [2024-07-15 21:05:04.781395] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.035 [2024-07-15 21:05:04.781412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.035 [2024-07-15 21:05:04.781420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.035 [2024-07-15 21:05:04.781639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.035 [2024-07-15 21:05:04.781857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.035 [2024-07-15 21:05:04.781864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.035 [2024-07-15 21:05:04.781871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.035 [2024-07-15 21:05:04.785426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.035 [2024-07-15 21:05:04.794632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.035 [2024-07-15 21:05:04.795202] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.035 [2024-07-15 21:05:04.795239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.035 [2024-07-15 21:05:04.795251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.035 [2024-07-15 21:05:04.795494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.035 [2024-07-15 21:05:04.795717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.035 [2024-07-15 21:05:04.795725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.035 [2024-07-15 21:05:04.795733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.035 [2024-07-15 21:05:04.799290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.035 [2024-07-15 21:05:04.808500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.035 [2024-07-15 21:05:04.809320] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.035 [2024-07-15 21:05:04.809357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.035 [2024-07-15 21:05:04.809368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.035 [2024-07-15 21:05:04.809607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.035 [2024-07-15 21:05:04.809830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.035 [2024-07-15 21:05:04.809838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.035 [2024-07-15 21:05:04.809850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.035 [2024-07-15 21:05:04.813408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.035 [2024-07-15 21:05:04.822410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.035 [2024-07-15 21:05:04.823137] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.035 [2024-07-15 21:05:04.823174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.035 [2024-07-15 21:05:04.823186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.035 [2024-07-15 21:05:04.823427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.035 [2024-07-15 21:05:04.823651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.035 [2024-07-15 21:05:04.823661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.035 [2024-07-15 21:05:04.823668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.035 [2024-07-15 21:05:04.827224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.035 [2024-07-15 21:05:04.836218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.035 [2024-07-15 21:05:04.836908] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.036 [2024-07-15 21:05:04.836926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.036 [2024-07-15 21:05:04.836933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.036 [2024-07-15 21:05:04.837158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.036 [2024-07-15 21:05:04.837377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.036 [2024-07-15 21:05:04.837385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.036 [2024-07-15 21:05:04.837392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.036 [2024-07-15 21:05:04.840965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.036 [2024-07-15 21:05:04.850170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.036 [2024-07-15 21:05:04.850851] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.036 [2024-07-15 21:05:04.850866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.036 [2024-07-15 21:05:04.850873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.036 [2024-07-15 21:05:04.851092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.036 [2024-07-15 21:05:04.851315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.036 [2024-07-15 21:05:04.851323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.036 [2024-07-15 21:05:04.851330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.036 [2024-07-15 21:05:04.854883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.036 [2024-07-15 21:05:04.864084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.036 [2024-07-15 21:05:04.864629] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.036 [2024-07-15 21:05:04.864644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.036 [2024-07-15 21:05:04.864651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.036 [2024-07-15 21:05:04.864869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.036 [2024-07-15 21:05:04.865088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.036 [2024-07-15 21:05:04.865096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.036 [2024-07-15 21:05:04.865103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.036 [2024-07-15 21:05:04.868662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.036 [2024-07-15 21:05:04.878082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.036 [2024-07-15 21:05:04.878714] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.036 [2024-07-15 21:05:04.878730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.036 [2024-07-15 21:05:04.878737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.036 [2024-07-15 21:05:04.878955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.036 [2024-07-15 21:05:04.879178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.036 [2024-07-15 21:05:04.879196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.036 [2024-07-15 21:05:04.879203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.036 [2024-07-15 21:05:04.882750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.036 [2024-07-15 21:05:04.891967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.036 [2024-07-15 21:05:04.892586] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.036 [2024-07-15 21:05:04.892624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.036 [2024-07-15 21:05:04.892636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.036 [2024-07-15 21:05:04.892879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.036 [2024-07-15 21:05:04.893102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.036 [2024-07-15 21:05:04.893110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.036 [2024-07-15 21:05:04.893117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.036 [2024-07-15 21:05:04.896679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.036 [2024-07-15 21:05:04.905882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.036 [2024-07-15 21:05:04.906527] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.036 [2024-07-15 21:05:04.906545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.036 [2024-07-15 21:05:04.906553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.036 [2024-07-15 21:05:04.906772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.036 [2024-07-15 21:05:04.906997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.036 [2024-07-15 21:05:04.907005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.036 [2024-07-15 21:05:04.907012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.036 [2024-07-15 21:05:04.910561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.036 [2024-07-15 21:05:04.919771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.036 [2024-07-15 21:05:04.920498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.036 [2024-07-15 21:05:04.920536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.036 [2024-07-15 21:05:04.920547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.036 [2024-07-15 21:05:04.920785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.036 [2024-07-15 21:05:04.921009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.036 [2024-07-15 21:05:04.921018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.036 [2024-07-15 21:05:04.921026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.036 [2024-07-15 21:05:04.924588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.297 [2024-07-15 21:05:04.933596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.297 [2024-07-15 21:05:04.934019] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.297 [2024-07-15 21:05:04.934037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.297 [2024-07-15 21:05:04.934045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.297 [2024-07-15 21:05:04.934269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.297 [2024-07-15 21:05:04.934489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.297 [2024-07-15 21:05:04.934497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.297 [2024-07-15 21:05:04.934503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.297 [2024-07-15 21:05:04.938049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.297 [2024-07-15 21:05:04.947478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.297 [2024-07-15 21:05:04.948148] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.297 [2024-07-15 21:05:04.948165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.297 [2024-07-15 21:05:04.948173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.297 [2024-07-15 21:05:04.948393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.297 [2024-07-15 21:05:04.948612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.297 [2024-07-15 21:05:04.948620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.297 [2024-07-15 21:05:04.948627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.297 [2024-07-15 21:05:04.952198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.297 [2024-07-15 21:05:04.961414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.297 [2024-07-15 21:05:04.962092] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.297 [2024-07-15 21:05:04.962107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.297 [2024-07-15 21:05:04.962115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.297 [2024-07-15 21:05:04.962338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.297 [2024-07-15 21:05:04.962557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.297 [2024-07-15 21:05:04.962566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.297 [2024-07-15 21:05:04.962573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.297 [2024-07-15 21:05:04.966116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.297 [2024-07-15 21:05:04.975331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.297 [2024-07-15 21:05:04.975982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.297 [2024-07-15 21:05:04.975997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.297 [2024-07-15 21:05:04.976005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.297 [2024-07-15 21:05:04.976230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.297 [2024-07-15 21:05:04.976450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.297 [2024-07-15 21:05:04.976457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.297 [2024-07-15 21:05:04.976464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.297 [2024-07-15 21:05:04.980005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.297 [2024-07-15 21:05:04.989216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.297 [2024-07-15 21:05:04.989941] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.297 [2024-07-15 21:05:04.989977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.297 [2024-07-15 21:05:04.989988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.297 [2024-07-15 21:05:04.990236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.297 [2024-07-15 21:05:04.990460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.297 [2024-07-15 21:05:04.990469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.297 [2024-07-15 21:05:04.990476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.297 [2024-07-15 21:05:04.994031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.297 [2024-07-15 21:05:05.003030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.297 [2024-07-15 21:05:05.003777] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.297 [2024-07-15 21:05:05.003820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.297 [2024-07-15 21:05:05.003831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.297 [2024-07-15 21:05:05.004070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.298 [2024-07-15 21:05:05.004302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.298 [2024-07-15 21:05:05.004311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.298 [2024-07-15 21:05:05.004319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.298 [2024-07-15 21:05:05.007869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.298 [2024-07-15 21:05:05.016881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.298 [2024-07-15 21:05:05.017635] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.298 [2024-07-15 21:05:05.017673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.298 [2024-07-15 21:05:05.017684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.298 [2024-07-15 21:05:05.017922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.298 [2024-07-15 21:05:05.018154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.298 [2024-07-15 21:05:05.018163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.298 [2024-07-15 21:05:05.018170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.298 [2024-07-15 21:05:05.021721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
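Each failed attempt above is the same cycle: bdev_nvme disconnects the controller, posix_sock_create() gets connect() errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 yet, the qpair flush then fails on the dead socket (Bad file descriptor), controller reinitialization fails, and the reset is reported as failed before the next attempt is scheduled. A minimal sketch of the same wait-for-listener behaviour, assuming bash with nc available (the address and port come from the log; the polling loop itself is illustrative, not the SPDK code path):
    # Hedged sketch: poll the NVMe/TCP listener the way the reset retries effectively do.
    # Assumes `nc` is installed; ADDR and PORT mirror the values seen in the log above.
    ADDR=10.0.0.2
    PORT=4420
    until nc -z -w 1 "$ADDR" "$PORT"; do
        echo "connect() to $ADDR:$PORT refused (errno 111), retrying..."
        sleep 0.5
    done
    echo "listener is up; the pending controller reset can now complete"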
00:29:01.298 [2024-07-15 21:05:05.030733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.298 [2024-07-15 21:05:05.031495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.298 [2024-07-15 21:05:05.031532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.298 [2024-07-15 21:05:05.031543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.298 [2024-07-15 21:05:05.031782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.298 [2024-07-15 21:05:05.032005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.298 [2024-07-15 21:05:05.032013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.298 [2024-07-15 21:05:05.032020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.298 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:01.298 [2024-07-15 21:05:05.035797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.298 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:01.298 21:05:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:01.298 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:01.298 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.298 [2024-07-15 21:05:05.044604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.298 [2024-07-15 21:05:05.045411] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.298 [2024-07-15 21:05:05.045449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.298 [2024-07-15 21:05:05.045464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.298 [2024-07-15 21:05:05.045703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.298 [2024-07-15 21:05:05.045926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.298 [2024-07-15 21:05:05.045935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.298 [2024-07-15 21:05:05.045942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.298 [2024-07-15 21:05:05.049497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.298 [2024-07-15 21:05:05.058516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.298 [2024-07-15 21:05:05.059176] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.298 [2024-07-15 21:05:05.059201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.298 [2024-07-15 21:05:05.059210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.298 [2024-07-15 21:05:05.059434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.298 [2024-07-15 21:05:05.059655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.298 [2024-07-15 21:05:05.059663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.298 [2024-07-15 21:05:05.059671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.298 [2024-07-15 21:05:05.063221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.298 [2024-07-15 21:05:05.072423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.298 [2024-07-15 21:05:05.073103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.298 [2024-07-15 21:05:05.073119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.298 [2024-07-15 21:05:05.073131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.298 [2024-07-15 21:05:05.073351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.298 [2024-07-15 21:05:05.073570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.298 [2024-07-15 21:05:05.073578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.298 [2024-07-15 21:05:05.073585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.298 21:05:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.298 21:05:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:01.298 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.298 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.298 [2024-07-15 21:05:05.077135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.298 [2024-07-15 21:05:05.080916] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.298 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.298 21:05:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:01.298 [2024-07-15 21:05:05.086340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.298 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.298 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.298 [2024-07-15 21:05:05.086979] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.298 [2024-07-15 21:05:05.086994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.298 [2024-07-15 21:05:05.087002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.298 [2024-07-15 21:05:05.087227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.298 [2024-07-15 21:05:05.087447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.298 [2024-07-15 21:05:05.087455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.298 [2024-07-15 21:05:05.087462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.298 [2024-07-15 21:05:05.091001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.298 [2024-07-15 21:05:05.100194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.298 [2024-07-15 21:05:05.100839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.298 [2024-07-15 21:05:05.100877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.298 [2024-07-15 21:05:05.100887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.298 [2024-07-15 21:05:05.101134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.298 [2024-07-15 21:05:05.101358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.298 [2024-07-15 21:05:05.101366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.298 [2024-07-15 21:05:05.101374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.298 [2024-07-15 21:05:05.104919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.299 [2024-07-15 21:05:05.114134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.299 Malloc0 00:29:01.299 [2024-07-15 21:05:05.114931] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.299 [2024-07-15 21:05:05.114968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.299 [2024-07-15 21:05:05.114979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.299 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.299 [2024-07-15 21:05:05.115226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.299 21:05:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:01.299 [2024-07-15 21:05:05.115450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.299 [2024-07-15 21:05:05.115459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.299 [2024-07-15 21:05:05.115467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.299 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.299 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.299 [2024-07-15 21:05:05.119013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.299 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.299 21:05:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:01.299 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.299 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.299 [2024-07-15 21:05:05.128012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.299 [2024-07-15 21:05:05.128642] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.299 [2024-07-15 21:05:05.128679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.299 [2024-07-15 21:05:05.128691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.299 [2024-07-15 21:05:05.128930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.299 [2024-07-15 21:05:05.129161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.299 [2024-07-15 21:05:05.129170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.299 [2024-07-15 21:05:05.129178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.299 [2024-07-15 21:05:05.132724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:01.299 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.299 21:05:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.299 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.299 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.299 [2024-07-15 21:05:05.141945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.299 [2024-07-15 21:05:05.142726] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.299 [2024-07-15 21:05:05.142763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f063b0 with addr=10.0.0.2, port=4420 00:29:01.299 [2024-07-15 21:05:05.142775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f063b0 is same with the state(5) to be set 00:29:01.299 [2024-07-15 21:05:05.143014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f063b0 (9): Bad file descriptor 00:29:01.299 [2024-07-15 21:05:05.143245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.299 [2024-07-15 21:05:05.143254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.299 [2024-07-15 21:05:05.143262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.299 [2024-07-15 21:05:05.146016] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.299 [2024-07-15 21:05:05.146814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.299 21:05:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.299 21:05:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1764820 00:29:01.299 [2024-07-15 21:05:05.155829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.560 [2024-07-15 21:05:05.191545] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
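Interleaved with the last retries, the target side is brought up through rpc_cmd: the TCP transport is created, a Malloc0 bdev is added, subsystem nqn.2016-06.io.spdk:cnode1 is created, the bdev is attached as a namespace, and a listener on 10.0.0.2:4420 is added, after which the pending reset finally succeeds ("Resetting controller successful"). Collected in order, the bring-up amounts to the sketch below (the rpc.py path and default RPC socket are assumptions; the commands and arguments are the ones visible in the log, host/bdevperf.sh steps 17-21):
    # Sketch of the target bring-up sequence seen above.
    RPC=scripts/rpc.py                                    # assumed invocation behind rpc_cmd
    $RPC nvmf_create_transport -t tcp -o -u 8192          # "TCP Transport Init"
    $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420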
00:29:09.695 00:29:09.695 Latency(us) 00:29:09.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.695 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:09.695 Verification LBA range: start 0x0 length 0x4000 00:29:09.695 Nvme1n1 : 15.01 8419.14 32.89 9642.14 0.00 7061.69 1085.44 18350.08 00:29:09.695 =================================================================================================================== 00:29:09.695 Total : 8419.14 32.89 9642.14 0.00 7061.69 1085.44 18350.08 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:09.955 rmmod nvme_tcp 00:29:09.955 rmmod nvme_fabrics 00:29:09.955 rmmod nvme_keyring 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1765841 ']' 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1765841 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1765841 ']' 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1765841 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1765841 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1765841' 00:29:09.955 killing process with pid 1765841 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1765841 00:29:09.955 21:05:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1765841 00:29:10.216 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:10.216 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
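As a sanity check on the bdevperf summary above: 8419.14 IOPS at the 4096-byte IO size over the 15.01 s run corresponds to the 32.89 MiB/s column (8419.14 * 4096 / 2^20 is about 32.89). A one-liner to recompute it, assuming only awk:
    # Recompute MiB/s from the IOPS and IO size printed in the bdevperf summary.
    awk 'BEGIN { iops = 8419.14; iosz = 4096; printf "%.2f MiB/s\n", iops * iosz / 1048576 }'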
00:29:10.216 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:10.216 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:10.216 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:10.216 21:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.216 21:05:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:10.216 21:05:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.761 21:05:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:12.761 00:29:12.761 real 0m27.494s 00:29:12.761 user 1m2.781s 00:29:12.761 sys 0m6.884s 00:29:12.761 21:05:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:12.761 21:05:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:12.761 ************************************ 00:29:12.761 END TEST nvmf_bdevperf 00:29:12.761 ************************************ 00:29:12.761 21:05:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:12.761 21:05:16 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:12.761 21:05:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:12.761 21:05:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.761 21:05:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:12.761 ************************************ 00:29:12.761 START TEST nvmf_target_disconnect 00:29:12.761 ************************************ 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:12.761 * Looking for test storage... 
00:29:12.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:12.761 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:12.762 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:12.762 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:12.762 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:12.762 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.762 21:05:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:12.762 21:05:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.762 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:12.762 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:12.762 21:05:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:12.762 21:05:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:19.401 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:19.401 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:19.401 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.402 21:05:23 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:19.402 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:19.402 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:19.402 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:19.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:29:19.663 00:29:19.663 --- 10.0.0.2 ping statistics --- 00:29:19.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.663 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:19.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:29:19.663 00:29:19.663 --- 10.0.0.1 ping statistics --- 00:29:19.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.663 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:19.663 ************************************ 00:29:19.663 START TEST nvmf_target_disconnect_tc1 00:29:19.663 ************************************ 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:29:19.663 
21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:19.663 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:19.924 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.924 [2024-07-15 21:05:23.607437] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.925 [2024-07-15 21:05:23.607500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1584e20 with addr=10.0.0.2, port=4420 00:29:19.925 [2024-07-15 21:05:23.607530] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:19.925 [2024-07-15 21:05:23.607545] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:19.925 [2024-07-15 21:05:23.607552] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:19.925 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:19.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:19.925 Initializing NVMe Controllers 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:19.925 00:29:19.925 real 0m0.110s 00:29:19.925 user 0m0.050s 00:29:19.925 sys 
0m0.061s 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:19.925 ************************************ 00:29:19.925 END TEST nvmf_target_disconnect_tc1 00:29:19.925 ************************************ 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:19.925 ************************************ 00:29:19.925 START TEST nvmf_target_disconnect_tc2 00:29:19.925 ************************************ 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1771880 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1771880 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1771880 ']' 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
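For orientation, the namespace plumbing that nvmf_tcp_init performs earlier in this run (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken directly from the log) amounts to the short shell recap below. This is an illustrative sketch of the logged commands, not the canonical nvmf/common.sh implementation:

# Move one port of the detected e810 pair into its own network namespace so the
# target side (10.0.0.2) and the initiator side (10.0.0.1) talk over a real NIC path.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions before any test logic runs.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With that topology in place, target processes in the target_disconnect tests are launched via "ip netns exec cvl_0_0_ns_spdk ..." so they listen on 10.0.0.2 port 4420, while the host-side tools stay in the root namespace on 10.0.0.1; tc1 above deliberately ran the reconnect example before any target was started and treated the resulting probe failure as a pass.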
00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:19.925 21:05:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.925 [2024-07-15 21:05:23.761142] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:29:19.925 [2024-07-15 21:05:23.761203] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.925 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.186 [2024-07-15 21:05:23.849637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:20.186 [2024-07-15 21:05:23.944150] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.186 [2024-07-15 21:05:23.944205] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.186 [2024-07-15 21:05:23.944213] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.186 [2024-07-15 21:05:23.944220] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.186 [2024-07-15 21:05:23.944226] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:20.186 [2024-07-15 21:05:23.944816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:20.186 [2024-07-15 21:05:23.944950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:20.186 [2024-07-15 21:05:23.945118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:20.186 [2024-07-15 21:05:23.945172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.758 Malloc0 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:20.758 21:05:24 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.758 [2024-07-15 21:05:24.630641] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.758 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.018 [2024-07-15 21:05:24.670976] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1772202 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:21.018 21:05:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:21.018 EAL: No free 2048 kB 
hugepages reported on node 1 00:29:22.939 21:05:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1771880 00:29:22.939 21:05:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Write completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Write completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Write completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Write completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Write completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Write completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Write completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Write completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Write completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Write completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Read completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 Write completed with error (sct=0, sc=8) 00:29:22.939 starting I/O failed 00:29:22.939 [2024-07-15 21:05:26.703690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:22.939 [2024-07-15 21:05:26.704384] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.704421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable 
to recover it. 00:29:22.939 [2024-07-15 21:05:26.704904] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.704917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.705434] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.705471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.705907] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.705919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.706459] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.706495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.706927] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.706939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.707468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.707504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.707844] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.707856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.708417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.708454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.708830] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.708842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.709373] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.709409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 
00:29:22.939 [2024-07-15 21:05:26.709842] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.709855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.710359] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.710396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.710725] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.710737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.711076] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.711085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.711398] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.711410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.711817] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.711827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.712143] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.712153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.939 [2024-07-15 21:05:26.712546] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.939 [2024-07-15 21:05:26.712555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.939 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.712977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.712986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.713415] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.713424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 
00:29:22.940 [2024-07-15 21:05:26.713887] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.713902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.714427] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.714463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.714696] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.714710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.715094] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.715106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.715531] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.715542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.715928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.715938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.716456] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.716494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.716926] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.716938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.717455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.717492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.717918] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.717931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 
00:29:22.940 [2024-07-15 21:05:26.718428] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.718465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.718900] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.718913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.719369] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.719405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.719844] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.719856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.720363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.720400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.720830] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.720842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.721427] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.721464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.721856] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.721869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.722306] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.722316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.722657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.722668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 
00:29:22.940 [2024-07-15 21:05:26.722979] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.722989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.723273] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.723284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.723662] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.723673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.724103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.724113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.724428] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.724439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.724750] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.724760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.725203] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.725214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.725616] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.725627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.725982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.725992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.726399] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.726410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 
00:29:22.940 [2024-07-15 21:05:26.726851] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.726861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.727390] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.727430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.727863] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.727878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.728177] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.728190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.728584] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.728597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.729034] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.729046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.729456] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.729469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.940 [2024-07-15 21:05:26.729932] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.940 [2024-07-15 21:05:26.729944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.940 qpair failed and we were unable to recover it. 00:29:22.941 [2024-07-15 21:05:26.730361] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.941 [2024-07-15 21:05:26.730374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.941 qpair failed and we were unable to recover it. 00:29:22.941 [2024-07-15 21:05:26.730804] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.941 [2024-07-15 21:05:26.730816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.941 qpair failed and we were unable to recover it. 
00:29:22.941 [2024-07-15 21:05:26.731349] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.941 [2024-07-15 21:05:26.731397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420
00:29:22.941 qpair failed and we were unable to recover it.
00:29:22.941 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats continuously from 21:05:26.731 through 21:05:26.823 ...]
00:29:22.946 [2024-07-15 21:05:26.823751] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.946 [2024-07-15 21:05:26.823779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420
00:29:22.946 qpair failed and we were unable to recover it.
00:29:22.946 [2024-07-15 21:05:26.824242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.946 [2024-07-15 21:05:26.824271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:22.946 qpair failed and we were unable to recover it. 00:29:23.216 [2024-07-15 21:05:26.824716] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.216 [2024-07-15 21:05:26.824744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.216 qpair failed and we were unable to recover it. 00:29:23.216 [2024-07-15 21:05:26.825193] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.216 [2024-07-15 21:05:26.825222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.216 qpair failed and we were unable to recover it. 00:29:23.216 [2024-07-15 21:05:26.825565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.216 [2024-07-15 21:05:26.825593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.216 qpair failed and we were unable to recover it. 00:29:23.216 [2024-07-15 21:05:26.826032] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.216 [2024-07-15 21:05:26.826061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.216 qpair failed and we were unable to recover it. 00:29:23.216 [2024-07-15 21:05:26.826486] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.826515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.826948] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.826976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.827403] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.827432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.827857] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.827884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.828262] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.828290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 
00:29:23.217 [2024-07-15 21:05:26.828649] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.828676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.829116] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.829172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.829629] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.829656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.830084] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.830111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.830541] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.830569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.830997] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.831025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.831471] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.831500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.831941] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.831975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.832379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.832408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.832835] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.832863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 
00:29:23.217 [2024-07-15 21:05:26.833313] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.833342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.833803] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.833830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.834179] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.834208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.834642] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.834669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.835119] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.835158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.835527] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.835555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.836002] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.836029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.836498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.836528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.836972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.836999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.837417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.837446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 
00:29:23.217 [2024-07-15 21:05:26.837873] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.837900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.838349] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.838378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.838880] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.838907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.839335] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.839363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.839776] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.839804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.840145] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.840174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.840657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.840684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.841003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.841031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.841456] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.841486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.841931] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.841959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 
00:29:23.217 [2024-07-15 21:05:26.842489] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.842577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.843105] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.843157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.843636] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.843665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.844115] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.844157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.844648] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.844678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.845144] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.217 [2024-07-15 21:05:26.845175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.217 qpair failed and we were unable to recover it. 00:29:23.217 [2024-07-15 21:05:26.845614] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.845643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.846097] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.846137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.846660] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.846749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.847402] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.847492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 
00:29:23.218 [2024-07-15 21:05:26.847990] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.848025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.848451] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.848484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.848920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.848949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.849453] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.849541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.850090] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.850140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.850604] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.850634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.851056] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.851085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.851541] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.851582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.852015] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.852042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.852518] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.852549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 
00:29:23.218 [2024-07-15 21:05:26.852959] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.852987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.853292] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.853322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.853796] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.853824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.854273] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.854304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.854783] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.854811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.855242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.855272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.855718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.855746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.856191] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.856220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.856721] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.856749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.857174] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.857203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 
00:29:23.218 [2024-07-15 21:05:26.857657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.857685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.858111] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.858149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.858603] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.858631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.859047] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.859075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.859407] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.859436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.859812] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.859839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.860293] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.860323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.860764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.860791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.861198] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.861227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.861674] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.861702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 
00:29:23.218 [2024-07-15 21:05:26.862025] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.862052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.862517] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.862545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.862998] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.863026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.863399] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.863428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.863854] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.863883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.864235] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.218 [2024-07-15 21:05:26.864274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.218 qpair failed and we were unable to recover it. 00:29:23.218 [2024-07-15 21:05:26.864718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.864747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.865176] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.865205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.865631] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.865660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.866087] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.866115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 
00:29:23.219 [2024-07-15 21:05:26.866559] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.866588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.866925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.866953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.867264] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.867296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.867732] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.867760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.868202] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.868232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.868653] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.868681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.869101] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.869137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.869556] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.869591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.870032] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.870060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.870522] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.870551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 
00:29:23.219 [2024-07-15 21:05:26.871038] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.871065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.871521] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.871549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.871994] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.872021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.872370] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.872400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.872864] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.872893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.873350] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.873379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.873713] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.873741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.874189] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.874218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.874650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.874678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.875130] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.875159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 
00:29:23.219 [2024-07-15 21:05:26.875642] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.875669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.876171] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.876202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.876627] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.876655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.877108] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.877152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.877534] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.877562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.877976] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.878004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.878475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.878504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.878846] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.878874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.879378] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.879407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.879834] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.879862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 
00:29:23.219 [2024-07-15 21:05:26.880250] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.880279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.880709] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.880736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.881176] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.881205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.881642] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.881669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.882149] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.882178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.882638] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.882666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.219 [2024-07-15 21:05:26.883134] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.219 [2024-07-15 21:05:26.883163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.219 qpair failed and we were unable to recover it. 00:29:23.220 [2024-07-15 21:05:26.883588] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.220 [2024-07-15 21:05:26.883616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.220 qpair failed and we were unable to recover it. 00:29:23.220 [2024-07-15 21:05:26.883992] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.220 [2024-07-15 21:05:26.884019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.220 qpair failed and we were unable to recover it. 00:29:23.220 [2024-07-15 21:05:26.884503] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.220 [2024-07-15 21:05:26.884533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.220 qpair failed and we were unable to recover it. 
00:29:23.220 [2024-07-15 21:05:26.884976] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.220 [2024-07-15 21:05:26.885004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.220 qpair failed and we were unable to recover it. 00:29:23.220 [2024-07-15 21:05:26.885445] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.220 [2024-07-15 21:05:26.885475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.220 qpair failed and we were unable to recover it. 00:29:23.220 [2024-07-15 21:05:26.885917] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.220 [2024-07-15 21:05:26.885945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.220 qpair failed and we were unable to recover it. 00:29:23.220 [2024-07-15 21:05:26.886375] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.220 [2024-07-15 21:05:26.886404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.220 qpair failed and we were unable to recover it. 00:29:23.220 [2024-07-15 21:05:26.886829] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.220 [2024-07-15 21:05:26.886857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.220 qpair failed and we were unable to recover it. 00:29:23.220 [2024-07-15 21:05:26.887287] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.220 [2024-07-15 21:05:26.887316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.220 qpair failed and we were unable to recover it. 00:29:23.220 [2024-07-15 21:05:26.887755] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.220 [2024-07-15 21:05:26.887782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.220 qpair failed and we were unable to recover it. 00:29:23.220 [2024-07-15 21:05:26.888212] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.220 [2024-07-15 21:05:26.888247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.220 qpair failed and we were unable to recover it. 00:29:23.220 [2024-07-15 21:05:26.888710] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.220 [2024-07-15 21:05:26.888738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.220 qpair failed and we were unable to recover it. 00:29:23.220 [2024-07-15 21:05:26.889175] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.220 [2024-07-15 21:05:26.889204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.220 qpair failed and we were unable to recover it. 
00:29:23.225 [2024-07-15 21:05:26.984086] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.984114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.984612] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.984642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.985071] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.985100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.985532] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.985562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.985882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.985910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.986329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.986360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.986814] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.986842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.987308] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.987338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.987758] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.987785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.988235] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.988264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 
00:29:23.225 [2024-07-15 21:05:26.988689] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.988716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.989146] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.989176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.989621] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.989648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.990114] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.990155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.990619] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.990647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.991098] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.991137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.991586] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.225 [2024-07-15 21:05:26.991613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.225 qpair failed and we were unable to recover it. 00:29:23.225 [2024-07-15 21:05:26.992047] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.992075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:26.992546] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.992575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:26.993040] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.993068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 
00:29:23.226 [2024-07-15 21:05:26.993394] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.993424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:26.993899] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.993928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:26.994379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.994409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:26.994913] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.994941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:26.995476] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.995569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:26.996104] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.996157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:26.996604] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.996633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:26.997082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.997110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:26.997450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.997485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:26.997940] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.997969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 
00:29:23.226 [2024-07-15 21:05:26.998465] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.998495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:26.998925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.998954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:26.999384] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.999415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:26.999869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:26.999898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.000438] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.000531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.001040] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.001075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.001565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.001597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.002031] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.002077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.002597] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.002628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.003077] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.003105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 
00:29:23.226 [2024-07-15 21:05:27.003452] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.003487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.003839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.003866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.004292] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.004322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.004753] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.004781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.005243] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.005273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.005719] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.005746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.006204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.006232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.006691] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.006718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.007170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.007198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.007705] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.007733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 
00:29:23.226 [2024-07-15 21:05:27.008171] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.008199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.008651] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.008679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.009119] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.009170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.009557] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.009585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.010038] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.010065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.010499] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.010527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.010956] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.010984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.011519] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.226 [2024-07-15 21:05:27.011547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.226 qpair failed and we were unable to recover it. 00:29:23.226 [2024-07-15 21:05:27.011980] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.012008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.012488] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.012518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 
00:29:23.227 [2024-07-15 21:05:27.012979] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.013006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.013495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.013523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.013955] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.013983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.014443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.014472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.014897] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.014926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.015402] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.015432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.015860] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.015888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.016318] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.016348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.016810] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.016837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.017281] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.017310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 
00:29:23.227 [2024-07-15 21:05:27.017742] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.017769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.018218] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.018247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.018687] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.018715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.019220] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.019249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.019690] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.019718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.020086] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.020115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.020550] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.020578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.021012] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.021046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.021457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.021487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.021815] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.021843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 
00:29:23.227 [2024-07-15 21:05:27.022273] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.022301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.022772] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.022799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.023231] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.023261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.023722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.023750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.024135] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.024163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.024625] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.024652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.025083] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.025111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.025508] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.025536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.025988] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.026016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.026457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.026486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 
00:29:23.227 [2024-07-15 21:05:27.026824] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.026862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.027329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.027359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.027815] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.027842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.028298] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.028326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.028665] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.028693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.029140] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.029169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.029632] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.029659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.030099] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.030162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.030590] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.227 [2024-07-15 21:05:27.030619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.227 qpair failed and we were unable to recover it. 00:29:23.227 [2024-07-15 21:05:27.031050] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.031078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 
00:29:23.228 [2024-07-15 21:05:27.031542] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.031573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.032007] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.032036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.032380] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.032409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.032839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.032867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.033224] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.033254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.033720] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.033748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.034214] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.034243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.034587] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.034613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.034929] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.034961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.035426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.035455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 
00:29:23.228 [2024-07-15 21:05:27.035902] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.035930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.036305] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.036335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.036772] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.036799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.037247] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.037278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.037742] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.037771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.038232] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.038261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.038743] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.038771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.039112] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.039158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.039591] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.039619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.040050] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.040078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 
00:29:23.228 [2024-07-15 21:05:27.040403] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.040433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.040768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.040798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.041256] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.041286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.041628] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.041657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.042079] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.042107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.042552] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.042580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.042921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.042953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.043429] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.043457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.043907] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.043935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.044408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.044437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 
00:29:23.228 [2024-07-15 21:05:27.044891] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.044918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.045354] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.045450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.045953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.045988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.046445] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.046477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.046933] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.046962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.047358] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.047387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.047714] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.047742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.048196] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.048226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.048663] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.048692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 00:29:23.228 [2024-07-15 21:05:27.049087] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.228 [2024-07-15 21:05:27.049114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.228 qpair failed and we were unable to recover it. 
00:29:23.228 [2024-07-15 21:05:27.049469] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.049509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.049838] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.049866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.050301] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.050332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.050772] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.050799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.051180] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.051212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.051670] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.051698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.052038] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.052071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.052545] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.052574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.053025] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.053053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.053532] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.053561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 
00:29:23.229 [2024-07-15 21:05:27.053996] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.054024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.054423] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.054453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.054819] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.054847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.055309] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.055338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.055762] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.055789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.056229] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.056258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.056698] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.056725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.057198] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.057226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.057667] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.057695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.058157] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.058188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 
00:29:23.229 [2024-07-15 21:05:27.058661] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.058690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.059134] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.059164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.059623] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.059651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.060104] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.060143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.060574] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.060602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.060942] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.060969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.061394] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.061423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.061859] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.061887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.062340] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.062368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.062814] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.062842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 
00:29:23.229 [2024-07-15 21:05:27.063290] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.063319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.063755] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.063782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.064239] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.064268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.064724] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.064752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.065222] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.229 [2024-07-15 21:05:27.065251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.229 qpair failed and we were unable to recover it. 00:29:23.229 [2024-07-15 21:05:27.065710] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.065737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.066190] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.066221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.066675] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.066702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.067142] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.067170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.067646] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.067673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 
00:29:23.230 [2024-07-15 21:05:27.068135] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.068164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.068588] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.068616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.069050] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.069076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.069526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.069555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.069982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.070017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.070448] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.070478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.070937] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.070965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.071422] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.071450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.071874] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.071904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.072406] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.072435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 
00:29:23.230 [2024-07-15 21:05:27.072831] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.072859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.073311] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.073339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.073793] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.073821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.074189] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.074219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.074574] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.074607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.075060] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.075088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.075537] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.075566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.075953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.075981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.076441] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.076470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.076904] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.076932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 
00:29:23.230 [2024-07-15 21:05:27.077375] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.077403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.077828] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.077855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.078297] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.078326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.078767] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.078794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.079248] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.079277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.079723] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.079751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.080082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.080109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.080577] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.080606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.081071] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.081099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.081564] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.081593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 
00:29:23.230 [2024-07-15 21:05:27.081923] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.081956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.082363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.082393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.082860] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.082888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.083333] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.083362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.083704] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.083731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.230 [2024-07-15 21:05:27.084208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.230 [2024-07-15 21:05:27.084237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.230 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.084691] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.084718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.085157] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.085186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.085616] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.085644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.086085] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.086113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 
00:29:23.231 [2024-07-15 21:05:27.086556] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.086583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.086924] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.086951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.087290] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.087323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.087842] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.087870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.088301] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.088337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.088802] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.088830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.089298] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.089327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.089684] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.089712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.090140] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.090168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.090685] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.090713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 
00:29:23.231 [2024-07-15 21:05:27.091196] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.091243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.091598] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.091629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.092085] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.092113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.092576] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.092605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.092992] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.093020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.093460] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.093489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.093824] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.093859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.094319] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.094348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.094813] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.094841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.095301] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.095331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 
00:29:23.231 [2024-07-15 21:05:27.095785] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.095812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.096315] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.096343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.096810] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.096838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.097204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.097233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.097680] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.097708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.098172] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.098201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.098670] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.098698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.099156] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.099186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.099675] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.099704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 00:29:23.231 [2024-07-15 21:05:27.100151] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.231 [2024-07-15 21:05:27.100181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.231 qpair failed and we were unable to recover it. 
00:29:23.501 [2024-07-15 21:05:27.100641] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.501 [2024-07-15 21:05:27.100671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.501 qpair failed and we were unable to recover it. 00:29:23.501 [2024-07-15 21:05:27.101108] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.501 [2024-07-15 21:05:27.101148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.501 qpair failed and we were unable to recover it. 00:29:23.501 [2024-07-15 21:05:27.101623] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.501 [2024-07-15 21:05:27.101651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.501 qpair failed and we were unable to recover it. 00:29:23.501 [2024-07-15 21:05:27.102090] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.501 [2024-07-15 21:05:27.102117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.501 qpair failed and we were unable to recover it. 00:29:23.501 [2024-07-15 21:05:27.102592] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.501 [2024-07-15 21:05:27.102622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.501 qpair failed and we were unable to recover it. 00:29:23.501 [2024-07-15 21:05:27.103064] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.501 [2024-07-15 21:05:27.103092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.501 qpair failed and we were unable to recover it. 00:29:23.501 [2024-07-15 21:05:27.103537] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.501 [2024-07-15 21:05:27.103567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.501 qpair failed and we were unable to recover it. 00:29:23.501 [2024-07-15 21:05:27.103946] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.501 [2024-07-15 21:05:27.103973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.501 qpair failed and we were unable to recover it. 00:29:23.501 [2024-07-15 21:05:27.104414] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.501 [2024-07-15 21:05:27.104444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.501 qpair failed and we were unable to recover it. 00:29:23.501 [2024-07-15 21:05:27.104881] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.501 [2024-07-15 21:05:27.104909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.501 qpair failed and we were unable to recover it. 
00:29:23.501 [2024-07-15 21:05:27.105261] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.501 [2024-07-15 21:05:27.105300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.501 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.105751] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.105779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.106244] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.106273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.106777] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.106805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.107264] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.107301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.107759] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.107787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.108223] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.108253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.108692] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.108720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.109175] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.109203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.109678] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.109705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 
00:29:23.502 [2024-07-15 21:05:27.110067] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.110094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.110558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.110586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.111047] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.111075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.111483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.111513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.111944] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.111972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.112410] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.112440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.112897] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.112924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.113353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.113383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.113733] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.113761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.114188] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.114217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 
00:29:23.502 [2024-07-15 21:05:27.114679] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.114706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.115147] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.115175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.115655] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.115683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.116134] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.116164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.116597] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.116625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.117066] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.117094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.117442] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.117471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.117924] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.502 [2024-07-15 21:05:27.117951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.502 qpair failed and we were unable to recover it. 00:29:23.502 [2024-07-15 21:05:27.118405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.503 [2024-07-15 21:05:27.118434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.503 qpair failed and we were unable to recover it. 00:29:23.503 [2024-07-15 21:05:27.118828] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.503 [2024-07-15 21:05:27.118856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.503 qpair failed and we were unable to recover it. 
00:29:23.503 [2024-07-15 21:05:27.119322] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.503 [2024-07-15 21:05:27.119351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.503 qpair failed and we were unable to recover it. 00:29:23.503 [2024-07-15 21:05:27.119830] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.503 [2024-07-15 21:05:27.119857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.503 qpair failed and we were unable to recover it. 00:29:23.503 [2024-07-15 21:05:27.120406] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.503 [2024-07-15 21:05:27.120503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.503 qpair failed and we were unable to recover it. 00:29:23.503 [2024-07-15 21:05:27.121042] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.503 [2024-07-15 21:05:27.121078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.503 qpair failed and we were unable to recover it. 00:29:23.503 [2024-07-15 21:05:27.121546] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.503 [2024-07-15 21:05:27.121578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.503 qpair failed and we were unable to recover it. 00:29:23.503 [2024-07-15 21:05:27.122015] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.503 [2024-07-15 21:05:27.122044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.503 qpair failed and we were unable to recover it. 00:29:23.503 [2024-07-15 21:05:27.122498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.503 [2024-07-15 21:05:27.122529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.503 qpair failed and we were unable to recover it. 00:29:23.503 [2024-07-15 21:05:27.122966] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.503 [2024-07-15 21:05:27.122995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.503 qpair failed and we were unable to recover it. 00:29:23.503 [2024-07-15 21:05:27.123446] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.503 [2024-07-15 21:05:27.123475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.503 qpair failed and we were unable to recover it. 00:29:23.503 [2024-07-15 21:05:27.123918] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.503 [2024-07-15 21:05:27.123946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.503 qpair failed and we were unable to recover it. 
00:29:23.503 [2024-07-15 21:05:27.124405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.503 [2024-07-15 21:05:27.124434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420
00:29:23.503 qpair failed and we were unable to recover it.
00:29:23.503 [2024-07-15 21:05:27.124874] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.503 [2024-07-15 21:05:27.124902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420
00:29:23.503 qpair failed and we were unable to recover it.
00:29:23.503 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every connection retry between 21:05:27.124405 and 21:05:27.226591 ...]
00:29:23.511 [2024-07-15 21:05:27.226561] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.511 [2024-07-15 21:05:27.226591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420
00:29:23.511 qpair failed and we were unable to recover it.
00:29:23.511 [2024-07-15 21:05:27.227037] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.227066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.227537] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.227568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.228031] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.228059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.228531] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.228563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.228963] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.228992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.229464] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.229493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.229975] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.230003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.230449] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.230477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.230949] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.230978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.231463] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.231492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 
00:29:23.511 [2024-07-15 21:05:27.231968] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.231996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.232465] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.232495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.232952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.232981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.233451] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.233480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.233944] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.233973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.234432] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.234461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.234927] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.234954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.235410] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.235440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.235873] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.511 [2024-07-15 21:05:27.235901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.511 qpair failed and we were unable to recover it. 00:29:23.511 [2024-07-15 21:05:27.236302] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.236332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 
00:29:23.512 [2024-07-15 21:05:27.236766] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.236794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.237273] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.237303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.237770] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.237798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.238251] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.238280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.238752] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.238779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.239239] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.239267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.239705] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.239733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.240181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.240210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.240649] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.240676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.241139] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.241169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 
00:29:23.512 [2024-07-15 21:05:27.241624] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.241652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.242061] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.242088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.242464] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.242495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.242973] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.243001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.243460] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.243496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.244002] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.244029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.244523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.244553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.245011] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.245039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.245531] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.245559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.245980] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.246010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 
00:29:23.512 [2024-07-15 21:05:27.246477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.246506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.246979] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.247007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.247329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.247358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.247712] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.247740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.248170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.248199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.248669] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.248697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.249081] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.249108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.249585] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.249613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.250063] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.512 [2024-07-15 21:05:27.250092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.512 qpair failed and we were unable to recover it. 00:29:23.512 [2024-07-15 21:05:27.250621] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.250650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 
00:29:23.513 [2024-07-15 21:05:27.251006] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.251037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.251493] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.251523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.252003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.252031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.252506] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.252535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.253000] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.253028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.253483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.253512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.253979] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.254006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.254495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.254524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.254971] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.254999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.255454] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.255483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 
00:29:23.513 [2024-07-15 21:05:27.255954] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.255982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.256466] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.256496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.256965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.256993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.257443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.257472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.257915] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.257943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.258432] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.258461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.258931] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.258958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.259414] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.259444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.259896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.259924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.260528] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.260632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 
00:29:23.513 [2024-07-15 21:05:27.261171] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.261210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.261666] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.261695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.262144] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.262174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.262618] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.262646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.263118] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.263181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.263622] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.263651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.263999] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.264027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.513 qpair failed and we were unable to recover it. 00:29:23.513 [2024-07-15 21:05:27.264459] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.513 [2024-07-15 21:05:27.264489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.264957] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.264985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.265456] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.265485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 
00:29:23.514 [2024-07-15 21:05:27.265930] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.265958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.266432] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.266535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.266926] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.266962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.267391] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.267423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.267886] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.267915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.268364] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.268394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.268868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.268896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.269353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.269383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.269856] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.269885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.270356] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.270385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 
00:29:23.514 [2024-07-15 21:05:27.270768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.270795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.271262] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.271292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.271659] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.271699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.272153] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.272185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.272697] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.272725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.273342] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.273445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.273974] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.274009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.274452] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.274483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.274941] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.274969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.275422] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.275452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 
00:29:23.514 [2024-07-15 21:05:27.275914] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.275943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.514 qpair failed and we were unable to recover it. 00:29:23.514 [2024-07-15 21:05:27.276389] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.514 [2024-07-15 21:05:27.276422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.276879] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.276907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.277373] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.277402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.277858] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.277886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.278462] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.278566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.279087] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.279140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.279592] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.279621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.280065] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.280095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.280559] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.280589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 
00:29:23.515 [2024-07-15 21:05:27.281061] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.281090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.281562] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.281592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.281948] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.281989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.282465] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.282497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.282960] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.283000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.283458] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.283488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.283942] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.283970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.284426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.284455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.284921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.284949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.285406] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.285434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 
00:29:23.515 [2024-07-15 21:05:27.285881] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.285909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.286361] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.286392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.286781] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.286809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.287291] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.287320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.287767] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.287795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.288243] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.288273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.288742] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.288769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.289209] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.289237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.515 [2024-07-15 21:05:27.289718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.515 [2024-07-15 21:05:27.289747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.515 qpair failed and we were unable to recover it. 00:29:23.516 [2024-07-15 21:05:27.290193] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.516 [2024-07-15 21:05:27.290223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.516 qpair failed and we were unable to recover it. 
00:29:23.516 [2024-07-15 21:05:27.290691] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.516 [2024-07-15 21:05:27.290719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420
00:29:23.516 qpair failed and we were unable to recover it.
[... the same three-line connect() failure for tqpair=0x7f0454000b90 (addr=10.0.0.2, port=4420, errno = 111) repeats for every retry between 21:05:27.291 and 21:05:27.394; duplicate entries omitted ...]
00:29:23.794 [2024-07-15 21:05:27.394630] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.794 [2024-07-15 21:05:27.394658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420
00:29:23.794 qpair failed and we were unable to recover it.
00:29:23.794 [2024-07-15 21:05:27.395114] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-07-15 21:05:27.395154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-07-15 21:05:27.395643] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-07-15 21:05:27.395678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-07-15 21:05:27.396144] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-07-15 21:05:27.396174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-07-15 21:05:27.396484] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.396511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.396896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.396926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.397527] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.397632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.398372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.398473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.398917] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.398952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.399462] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.399494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.399945] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.399973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 
00:29:23.795 [2024-07-15 21:05:27.400408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.400438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.400909] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.400938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.401414] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.401444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.401893] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.401922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.402383] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.402413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.402885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.402915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.403455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.403558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.404117] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.404187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.404604] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.404634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.405137] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.405168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 
00:29:23.795 [2024-07-15 21:05:27.405637] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.405666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.406016] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.406049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.406628] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.406731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.407390] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.407493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.408054] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.408090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.408491] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.408524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.408984] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.409012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.409540] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.409571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.409993] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.410024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.410477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.410507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 
00:29:23.795 [2024-07-15 21:05:27.410983] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.411012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.411460] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.411489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.411930] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.411959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.412437] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.412466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.412938] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.412966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.413412] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.413441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.413777] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.413805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.414245] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.414275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.414715] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.414743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.415219] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.415248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 
00:29:23.795 [2024-07-15 21:05:27.415760] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-07-15 21:05:27.415788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-07-15 21:05:27.416144] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.416180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.416649] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.416677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.417136] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.417165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.417672] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.417700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.418159] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.418192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.418631] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.418659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.419114] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.419154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.419625] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.419653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.420101] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.420145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 
00:29:23.796 [2024-07-15 21:05:27.420610] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.420639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.421031] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.421059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.421626] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.421726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.422335] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.422438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.422981] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.423016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.423519] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.423552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.424008] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.424037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.424482] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.424512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.424970] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.424999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.425455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.425484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 
00:29:23.796 [2024-07-15 21:05:27.425935] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.425962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.426408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.426439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.426784] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.426818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.427293] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.427322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.427796] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.427823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.428260] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.428289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.428754] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.428782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.429261] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.429289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.429762] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.429791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.430320] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.430350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 
00:29:23.796 [2024-07-15 21:05:27.430819] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.430847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.431310] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.431342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.431801] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.431830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.432274] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.432303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.432672] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.432700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.433181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.433211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.433763] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.433791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.434265] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.434294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.434762] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.434790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.435147] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.435188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 
00:29:23.796 [2024-07-15 21:05:27.435697] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.435726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-07-15 21:05:27.436160] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-07-15 21:05:27.436198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.436687] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.436718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.437190] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.437240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.437756] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.437784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.438340] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.438442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.438990] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.439026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.439524] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.439557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.440025] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.440053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.440531] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.440561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 
00:29:23.797 [2024-07-15 21:05:27.441025] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.441054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.441486] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.441515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.441914] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.441943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.442393] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.442423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.442868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.442896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.443464] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.443569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.444118] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.444192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.444674] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.444704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.445191] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.445244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.445732] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.445760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 
00:29:23.797 [2024-07-15 21:05:27.446211] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.446242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.446724] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.446753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.447260] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.447289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.447764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.447792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.448268] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.448298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.448781] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.448809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.449190] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.449220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.449704] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.449733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.450181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.450213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.450666] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.450695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 
00:29:23.797 [2024-07-15 21:05:27.451161] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.451191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.451550] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.451578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.451919] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.451957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.452418] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.452448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.452920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.452948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.453457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.453487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.453924] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.453952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.454431] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.454461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.454920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.454948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.455388] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.455416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 
00:29:23.797 [2024-07-15 21:05:27.455819] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.455847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.456307] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.456345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.797 qpair failed and we were unable to recover it. 00:29:23.797 [2024-07-15 21:05:27.456809] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.797 [2024-07-15 21:05:27.456836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.457285] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.457315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.457759] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.457787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.458236] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.458266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.458735] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.458763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.459212] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.459240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.459760] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.459788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.460275] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.460304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 
00:29:23.798 [2024-07-15 21:05:27.460769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.460797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.461183] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.461212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.461687] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.461715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.462188] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.462218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.462620] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.462648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.463115] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.463159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.463613] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.463644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.464138] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.464168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.464566] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.464593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.464959] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.464997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 
00:29:23.798 [2024-07-15 21:05:27.465458] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.465489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.465848] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.465877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.466409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.466439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.466923] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.466952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.467509] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.467611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.468173] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.468211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.468628] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.468657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 00:29:23.798 [2024-07-15 21:05:27.469161] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.798 [2024-07-15 21:05:27.469192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:23.798 qpair failed and we were unable to recover it. 
00:29:23.798 Read completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Read completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Read completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Read completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Write completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Read completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Write completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Read completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Read completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Write completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Write completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Write completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Write completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Write completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Write completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Read completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Write completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Write completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Write completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Read completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.798 Write completed with error (sct=0, sc=8) 00:29:23.798 starting I/O failed 00:29:23.799 Write completed with error (sct=0, sc=8) 00:29:23.799 starting I/O failed 00:29:23.799 Read completed with error (sct=0, sc=8) 00:29:23.799 starting I/O failed 00:29:23.799 Write completed with error (sct=0, sc=8) 00:29:23.799 starting I/O failed 00:29:23.799 Write completed with error (sct=0, sc=8) 00:29:23.799 starting I/O failed 00:29:23.799 Read completed with error (sct=0, sc=8) 00:29:23.799 starting I/O failed 00:29:23.799 Read completed with error (sct=0, sc=8) 00:29:23.799 starting I/O failed 00:29:23.799 Write completed with error (sct=0, sc=8) 00:29:23.799 starting I/O failed 00:29:23.799 Read completed with error (sct=0, sc=8) 00:29:23.799 starting I/O failed 00:29:23.799 Read completed with error (sct=0, sc=8) 00:29:23.799 starting I/O failed 00:29:23.799 Read completed with error (sct=0, sc=8) 00:29:23.799 starting I/O failed 00:29:23.799 Read completed with error (sct=0, sc=8) 00:29:23.799 starting I/O failed 00:29:23.799 [2024-07-15 21:05:27.469541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.799 [2024-07-15 21:05:27.470032] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.470050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 
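Note: the block above switches from connection retries to in-flight I/O failures: 32 read/write completions return (sct=0, sc=8) and the queue pair is then failed with CQ transport error -6, which on Linux is -ENXIO ("No such device or address"), matching the message in the log. The sketch below shows, assuming the public spdk/nvme.h completion API, how a completion callback typically surfaces the sct/sc pair; it is an illustration, not the test code that produced these lines.

    #include <stdio.h>
    #include <spdk/nvme.h>

    /* Sketch only: an I/O completion callback reporting the (sct, sc)
     * pair seen in the "completed with error (sct=0, sc=8)" lines. */
    static void io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)cb_arg;
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* sct = status code type (0 = generic command status),
             * sc  = status code within that type, per the NVMe spec. */
            fprintf(stderr, "I/O failed: sct=%u, sc=%u\n",
                    cpl->status.sct, cpl->status.sc);
        }
    }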
00:29:23.799 [2024-07-15 21:05:27.470524] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.470539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.470945] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.470956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.471468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.471528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.471975] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.471988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.472488] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.472548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.473003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.473018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.473554] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.473614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.474079] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.474093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.474633] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.474695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.475181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.475196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 
00:29:23.799 [2024-07-15 21:05:27.475650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.475660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.476091] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.476102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.476625] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.476685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.477352] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.477412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.477873] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.477885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.478450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.478510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.478905] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.478917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.479359] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.479421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.479917] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.479929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.480388] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.480462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 
00:29:23.799 [2024-07-15 21:05:27.480936] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.480949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.481447] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.481509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.482006] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.482020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.482443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.482454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.482769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.482779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.483212] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.483222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.483545] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.483555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.484016] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.484026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.484455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.484465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.484798] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.484816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 
00:29:23.799 [2024-07-15 21:05:27.485262] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.485273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.485723] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.485733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.486137] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.486147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.486570] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.486581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.799 [2024-07-15 21:05:27.487017] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.799 [2024-07-15 21:05:27.487027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.799 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.487443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.487454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.487885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.487895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.488384] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.488395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.488737] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.488749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.489199] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.489210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 
00:29:23.800 [2024-07-15 21:05:27.489482] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.489501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.489952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.489963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.490378] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.490388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.490835] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.490845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.491249] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.491260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.491641] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.491652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.492084] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.492100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.492554] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.492564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.493006] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.493017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.493470] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.493482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 
00:29:23.800 [2024-07-15 21:05:27.493906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.493917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.494354] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.494414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.494892] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.494905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.495169] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.495189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.495636] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.495647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.496057] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.496067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.496391] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.496403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.496837] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.496847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.497254] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.497265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.497708] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.497718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 
00:29:23.800 [2024-07-15 21:05:27.498107] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.498117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.498558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.498568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.498883] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.498894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.499363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.499424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.499885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.499898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.500421] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.500482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.500941] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.500954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.501486] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.501548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.501999] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.502011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.502451] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.502462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 
00:29:23.800 [2024-07-15 21:05:27.502785] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.502795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.503234] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.503245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.503672] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.503682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.504094] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.504111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.504530] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.504541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.504990] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.800 [2024-07-15 21:05:27.505002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.800 qpair failed and we were unable to recover it. 00:29:23.800 [2024-07-15 21:05:27.505448] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.505460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.505895] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.505905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.506491] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.506552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.507063] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.507077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 
00:29:23.801 [2024-07-15 21:05:27.507487] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.507499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.508022] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.508034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.508486] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.508497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.508938] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.508948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.509459] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.509516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.509960] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.509974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.510480] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.510537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.511004] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.511020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.511472] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.511484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.511889] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.511902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 
00:29:23.801 [2024-07-15 21:05:27.512438] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.512495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.512935] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.512948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.513458] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.513515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.513959] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.513972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.514498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.514555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.514996] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.515009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.515498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.515509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.515910] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.515920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.516420] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.516478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.516932] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.516947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 
00:29:23.801 [2024-07-15 21:05:27.517460] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.517517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.517972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.517984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.518490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.518547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.518982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.518996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.519492] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.519549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.519980] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.519994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.520544] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.520601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.521040] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.521052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.521590] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.521646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.522095] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.522108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 
00:29:23.801 [2024-07-15 21:05:27.522509] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.522521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.522928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.522938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.523531] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.523588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.524040] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.524053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.524562] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.524624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.525060] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.525073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.525398] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.525412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.801 qpair failed and we were unable to recover it. 00:29:23.801 [2024-07-15 21:05:27.525837] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.801 [2024-07-15 21:05:27.525847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.526369] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.526426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.526862] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.526875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 
00:29:23.802 [2024-07-15 21:05:27.527314] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.527326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.527729] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.527740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.528138] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.528151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.528575] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.528587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.528987] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.529000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.529451] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.529464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.529948] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.529959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.530363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.530374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.530782] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.530792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.531218] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.531230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 
00:29:23.802 [2024-07-15 21:05:27.531665] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.531675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.532104] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.532115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.532543] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.532554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.532956] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.532966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.533502] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.533559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.533996] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.534009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.534455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.534467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.534866] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.534877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.535417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.535474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.535951] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.535964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 
00:29:23.802 [2024-07-15 21:05:27.536451] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.536507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.536935] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.536954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.537475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.537534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.537972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.537985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.538496] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.538552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.539001] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.539013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.539445] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.539457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.539866] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.539877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.540417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.540474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.540920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.540933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 
00:29:23.802 [2024-07-15 21:05:27.541474] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.541530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.541964] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.541977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.542436] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.542493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.542828] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.542844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.543395] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.543451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.543922] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.543936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.544339] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.544352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.802 [2024-07-15 21:05:27.544784] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.802 [2024-07-15 21:05:27.544797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.802 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.545231] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.545243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.545660] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.545671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 
00:29:23.803 [2024-07-15 21:05:27.546106] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.546117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.546559] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.546570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.546974] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.546985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.547395] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.547451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.547909] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.547922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.548448] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.548504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.548841] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.548853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.549425] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.549483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.549940] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.549959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.550485] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.550541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 
00:29:23.803 [2024-07-15 21:05:27.550985] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.550998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.551333] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.551347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.551768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.551778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.552180] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.552192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.552642] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.552651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.552943] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.552953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.553435] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.553446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.553845] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.553856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.554243] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.554256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.554573] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.554582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 
00:29:23.803 [2024-07-15 21:05:27.554982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.554992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.555435] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.555447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.555886] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.555898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.556465] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.556522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.556962] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.556977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.557511] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.557566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.557996] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.558008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.558487] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.558498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.558902] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.558913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.559446] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.559503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 
00:29:23.803 [2024-07-15 21:05:27.559949] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.559961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.560468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.560525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.560997] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.561012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.561431] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.803 [2024-07-15 21:05:27.561443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.803 qpair failed and we were unable to recover it. 00:29:23.803 [2024-07-15 21:05:27.561769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.561782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.562223] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.562234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.562647] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.562657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.563058] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.563068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.563494] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.563505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.563961] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.563972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 
00:29:23.804 [2024-07-15 21:05:27.564504] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.564560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.565010] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.565024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.565469] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.565480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.565919] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.565930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.566476] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.566532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.566872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.566884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.567240] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.567251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.567692] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.567703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.568004] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.568016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.568466] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.568478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 
00:29:23.804 [2024-07-15 21:05:27.568876] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.568890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.569299] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.569310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.569739] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.569749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.570174] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.570185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.570587] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.570597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.571029] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.571041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.571464] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.571474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.571883] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.571893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.572337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.572347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.572658] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.572668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 
00:29:23.804 [2024-07-15 21:05:27.573032] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.573043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.573526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.573538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.573929] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.573940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.574379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.574392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.574854] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.574866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.575304] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.575357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.575802] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.575815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.576253] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.576264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.576685] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.576696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.577114] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.577129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 
00:29:23.804 [2024-07-15 21:05:27.577587] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.577597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.578016] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.578025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.578469] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.578480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.578919] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.578928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.804 [2024-07-15 21:05:27.579358] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.804 [2024-07-15 21:05:27.579411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.804 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.579862] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.579875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.580139] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.580162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.580601] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.580614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.581014] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.581024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.581539] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.581593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 
00:29:23.805 [2024-07-15 21:05:27.582035] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.582048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.582519] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.582530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.582966] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.582977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.583505] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.583558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.584006] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.584019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.584417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.584428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.584778] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.584787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.585271] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.585282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.585594] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.585603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.586049] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.586059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 
00:29:23.805 [2024-07-15 21:05:27.586459] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.586470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.586870] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.586879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.587289] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.587299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.587731] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.587741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.588068] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.588084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.588498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.588510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.588929] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.588938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.589186] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.589206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.589657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.589667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.590070] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.590080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 
00:29:23.805 [2024-07-15 21:05:27.590475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.590485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.590882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.590891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.591313] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.591323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.591861] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.591876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.592418] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.592472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.592906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.592918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.593440] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.593492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.593929] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.593942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.594457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.594509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.594978] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.594990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 
00:29:23.805 [2024-07-15 21:05:27.595552] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.595606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.596047] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.596060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.596563] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.596617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.597056] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.597070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.597389] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.597402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.805 qpair failed and we were unable to recover it. 00:29:23.805 [2024-07-15 21:05:27.597817] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.805 [2024-07-15 21:05:27.597827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.598096] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.598107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.598441] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.598453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.598876] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.598885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.599389] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.599441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 
00:29:23.806 [2024-07-15 21:05:27.599925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.599939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.600386] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.600439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.600882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.600895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.601412] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.601465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.601910] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.601923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.602453] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.602505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.602952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.602964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.603483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.603536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.603993] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.604006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.604448] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.604459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 
00:29:23.806 [2024-07-15 21:05:27.604883] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.604902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.605441] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.605494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.605944] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.605959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.606464] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.606518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.606983] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.606996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.607401] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.607413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.607845] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.607856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.608296] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.608306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.608774] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.608787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.609119] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.609136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 
00:29:23.806 [2024-07-15 21:05:27.609532] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.609543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.609954] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.609963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.610485] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.610538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.610971] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.610984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.611428] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.611482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.611926] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.611939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.612472] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.612525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.612868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.612880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.613327] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.613381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.613810] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.613824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 
00:29:23.806 [2024-07-15 21:05:27.614237] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.614248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.614717] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.614726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.615116] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.615134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.615533] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.615544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.615839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.615849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.616441] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.806 [2024-07-15 21:05:27.616494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.806 qpair failed and we were unable to recover it. 00:29:23.806 [2024-07-15 21:05:27.616959] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.616972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.617403] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.617456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.617898] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.617915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.618418] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.618473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 
00:29:23.807 [2024-07-15 21:05:27.618937] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.618950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.619452] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.619505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.619954] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.619966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.620377] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.620431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.620951] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.620965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.621543] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.621596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.622045] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.622058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.622621] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.622675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.623137] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.623151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.623485] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.623496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 
00:29:23.807 [2024-07-15 21:05:27.623926] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.623938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.624367] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.624424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.624857] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.624871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.625339] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.625392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.625858] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.625873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.626405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.626459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.626909] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.626922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.627447] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.627500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.627947] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.627960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.628498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.628550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 
00:29:23.807 [2024-07-15 21:05:27.629001] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.629014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.629405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.629416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.629846] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.629856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.630367] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.630420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.630857] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.630869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.631218] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.631230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.631652] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.631662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.632111] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.632121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.632587] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.632597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.633056] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.633065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 
00:29:23.807 [2024-07-15 21:05:27.633565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.633575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.633967] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.807 [2024-07-15 21:05:27.633978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.807 qpair failed and we were unable to recover it. 00:29:23.807 [2024-07-15 21:05:27.634536] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.634589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.635035] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.635047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.635604] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.635658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.636163] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.636176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.636583] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.636592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.636994] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.637004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.637436] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.637452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.637861] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.637871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 
00:29:23.808 [2024-07-15 21:05:27.638276] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.638287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.638508] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.638521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.638948] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.638959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.639291] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.639302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.639733] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.639742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.640161] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.640171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.640595] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.640604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.641019] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.641029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.641442] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.641453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.641885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.641894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 
00:29:23.808 [2024-07-15 21:05:27.642271] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.642282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.642706] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.642715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.643118] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.643134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.643553] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.643564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.643955] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.643965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.644387] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.644397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.644853] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.644863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.645212] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.645224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.645621] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.645630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.646097] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.646107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 
00:29:23.808 [2024-07-15 21:05:27.646514] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.646524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.646925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.646936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.647262] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.647272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.647753] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.647765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.648156] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.648167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.648580] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.648593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.649007] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.649016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.649384] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.649394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.649808] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.649817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.650217] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.650227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 
00:29:23.808 [2024-07-15 21:05:27.650542] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.650551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.651005] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.651015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.651407] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.808 [2024-07-15 21:05:27.651417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.808 qpair failed and we were unable to recover it. 00:29:23.808 [2024-07-15 21:05:27.651842] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.651853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.652278] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.652289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.652766] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.652776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.653173] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.653183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.653621] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.653630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.654035] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.654044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.654534] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.654545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 
00:29:23.809 [2024-07-15 21:05:27.654961] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.654971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.655392] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.655403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.655801] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.655812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.656339] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.656392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.656880] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.656893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.657380] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.657392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.657786] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.657796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.658344] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.658397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.658869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.658881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.659289] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.659299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 
00:29:23.809 [2024-07-15 21:05:27.659723] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.659733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.660145] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.660155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.660589] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.660599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.661000] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.661010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.661414] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.661425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.661869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.661878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.662302] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.662313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.662765] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.662775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.663170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.663180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.663619] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.663630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 
00:29:23.809 [2024-07-15 21:05:27.664044] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.664054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.664462] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.664472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.664897] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.664907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.665393] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.665404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.665842] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.665851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.666347] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.666397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.666847] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.666860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.667306] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.667317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.667714] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.667724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.668113] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.668127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 
00:29:23.809 [2024-07-15 21:05:27.668576] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.668586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.668986] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.668996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.669510] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.669561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.670084] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.670097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.809 qpair failed and we were unable to recover it. 00:29:23.809 [2024-07-15 21:05:27.670605] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.809 [2024-07-15 21:05:27.670655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:23.810 [2024-07-15 21:05:27.671159] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.671192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:23.810 [2024-07-15 21:05:27.671579] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.671589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:23.810 [2024-07-15 21:05:27.672018] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.672029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:23.810 [2024-07-15 21:05:27.672381] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.672391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:23.810 [2024-07-15 21:05:27.672795] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.672805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 
00:29:23.810 [2024-07-15 21:05:27.673206] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.673217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:23.810 [2024-07-15 21:05:27.673595] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.673606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:23.810 [2024-07-15 21:05:27.674096] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.674106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:23.810 [2024-07-15 21:05:27.674431] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.674442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:23.810 [2024-07-15 21:05:27.674857] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.674867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:23.810 [2024-07-15 21:05:27.675271] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.675281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:23.810 [2024-07-15 21:05:27.675709] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.675718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:23.810 [2024-07-15 21:05:27.676140] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.676153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:23.810 [2024-07-15 21:05:27.676548] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.676558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:23.810 [2024-07-15 21:05:27.676952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.676963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 
00:29:23.810 [2024-07-15 21:05:27.677315] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.677326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:23.810 [2024-07-15 21:05:27.677751] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.810 [2024-07-15 21:05:27.677761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:23.810 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.678179] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.678193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.678635] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.678650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.679069] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.679080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.679511] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.679521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.679921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.679930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.680353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.680366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.680789] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.680800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.681129] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.681141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 
00:29:24.082 [2024-07-15 21:05:27.681574] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.681584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.682014] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.682024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.682505] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.682515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.682930] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.682939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.683475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.683525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.683949] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.683962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.684491] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.684541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.684878] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.684892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.685415] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.685466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.685830] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.685842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 
00:29:24.082 [2024-07-15 21:05:27.686302] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.686313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.686768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.686778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.687354] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.687404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.687838] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.687850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.688245] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.688255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.688545] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.082 [2024-07-15 21:05:27.688555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.082 qpair failed and we were unable to recover it. 00:29:24.082 [2024-07-15 21:05:27.688987] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.688996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.689405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.689417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.689762] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.689773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.690017] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.690035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 
00:29:24.083 [2024-07-15 21:05:27.690443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.690460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.690863] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.690873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.691302] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.691313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.691625] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.691634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.691946] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.691956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.692244] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.692254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.692665] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.692676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.693121] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.693138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.693657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.693667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.693975] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.693985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 
00:29:24.083 [2024-07-15 21:05:27.694495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.694545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.694989] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.695001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.695400] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.695411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.695846] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.695856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.696365] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.696415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.696755] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.696768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.697103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.697113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.697560] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.697572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.697996] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.698006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.698395] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.698406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 
00:29:24.083 [2024-07-15 21:05:27.698900] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.698910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.699473] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.699523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.699968] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.699981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.700558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.700607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.700967] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.700979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.701511] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.701561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.701940] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.701952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.702514] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.702565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.703018] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.703031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.703462] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.703473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 
00:29:24.083 [2024-07-15 21:05:27.703898] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.703909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.704461] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.704510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.705019] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.705032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.705277] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.705289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.705548] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.705565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.706003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.706014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.083 qpair failed and we were unable to recover it. 00:29:24.083 [2024-07-15 21:05:27.706329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.083 [2024-07-15 21:05:27.706341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.706749] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.706760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.707192] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.707203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.707650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.707661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 
00:29:24.084 [2024-07-15 21:05:27.708078] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.708089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.708411] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.708422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.708852] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.708863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.709351] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.709362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.709799] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.709809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.710205] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.710216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.710537] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.710548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.710977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.710987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.711431] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.711442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.711856] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.711866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 
00:29:24.084 [2024-07-15 21:05:27.712396] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.712444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.712860] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.712874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.713300] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.713312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.713640] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.713652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.713934] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.713945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.714399] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.714411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.714833] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.714843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.715372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.715421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.715615] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.715631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.715955] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.715966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 
00:29:24.084 [2024-07-15 21:05:27.716397] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.716408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.716832] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.716843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.717273] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.717284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.717713] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.717724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.718152] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.718163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.718619] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.718630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.719056] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.719067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.719480] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.719492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.719805] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.719821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.720247] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.720258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 
00:29:24.084 [2024-07-15 21:05:27.720711] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.720722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.721043] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.721054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.721488] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.721499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.721918] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.721929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.722349] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.722360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.722758] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.722767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.723197] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.723208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.084 [2024-07-15 21:05:27.723644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.084 [2024-07-15 21:05:27.723654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.084 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.724198] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.724209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.724627] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.724637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 
00:29:24.085 [2024-07-15 21:05:27.725059] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.725070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.725544] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.725554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.725989] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.726000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.726319] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.726330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.726794] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.726804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.727246] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.727256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.727695] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.727705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.728136] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.728146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.728571] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.728581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.729001] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.729012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 
00:29:24.085 [2024-07-15 21:05:27.729322] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.729332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.729800] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.729810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.730206] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.730216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.730555] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.730565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.730916] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.730925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.731349] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.731363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.731845] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.731854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.732241] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.732251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.732683] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.732693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.733120] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.733136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 
00:29:24.085 [2024-07-15 21:05:27.733557] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.733566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.733943] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.733952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.734464] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.734511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.734847] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.734859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.735379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.735426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.735844] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.735857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.736195] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.736205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.736622] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.736632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.737019] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.737029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.737339] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.737349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 
00:29:24.085 [2024-07-15 21:05:27.737777] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.737788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.738230] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.738241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.738657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.738667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.739080] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.739090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.739532] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.739542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.739926] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.739936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.740262] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.740272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.740691] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.740700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.085 qpair failed and we were unable to recover it. 00:29:24.085 [2024-07-15 21:05:27.741105] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.085 [2024-07-15 21:05:27.741115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.741539] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.741549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 
00:29:24.086 [2024-07-15 21:05:27.741767] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.741777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.742195] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.742205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.742525] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.742537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.742941] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.742951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.743258] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.743268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.743685] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.743694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.744004] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.744013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.744517] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.744527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.744920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.744930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.745259] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.745268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 
00:29:24.086 [2024-07-15 21:05:27.745708] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.745718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.746025] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.746036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.746554] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.746564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.746995] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.747006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.747426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.747436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.747867] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.747877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.748301] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.748311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.748616] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.748626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.749043] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.749053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.749528] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.749537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 
00:29:24.086 [2024-07-15 21:05:27.749924] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.749933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.750253] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.750270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.750686] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.750696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.751116] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.751140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.751537] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.751547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.751920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.751930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.752445] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.752491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.752791] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.752806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.753235] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.753246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.753665] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.753674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 
00:29:24.086 [2024-07-15 21:05:27.754120] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.754141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.754364] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.754373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.086 [2024-07-15 21:05:27.754788] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.086 [2024-07-15 21:05:27.754797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.086 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.755103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.755114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.755530] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.755540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.755953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.755963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.756395] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.756441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.756872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.756885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.757379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.757425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.757655] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.757667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 
00:29:24.087 [2024-07-15 21:05:27.758039] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.758049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.758573] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.758584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.758972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.758982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.759505] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.759550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.759992] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.760004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.760409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.760420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.760861] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.760871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.761164] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.761175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.761518] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.761528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.761848] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.761859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 
00:29:24.087 [2024-07-15 21:05:27.762153] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.762163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.762590] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.762600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.762893] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.762903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.763346] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.763356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.763661] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.763672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.764074] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.764083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.764508] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.764517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.764902] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.764913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.765320] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.765330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.765631] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.765641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 
00:29:24.087 [2024-07-15 21:05:27.766050] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.766059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.766533] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.766543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.766987] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.766998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.767495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.767541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.768006] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.768018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.768333] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.768344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.768654] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.768664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.769136] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.769146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.769545] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.769554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.769949] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.769960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 
00:29:24.087 [2024-07-15 21:05:27.770379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.770395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.770769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.770778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.087 [2024-07-15 21:05:27.771218] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.087 [2024-07-15 21:05:27.771228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.087 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.771618] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.771628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.772043] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.772053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.772515] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.772526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.772997] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.773007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.773457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.773467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.773936] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.773945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.774258] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.774269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 
00:29:24.088 [2024-07-15 21:05:27.774685] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.774694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.775087] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.775096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.775565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.775575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.775920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.775930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.776382] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.776427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.776761] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.776775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.777197] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.777208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.777638] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.777648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.778052] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.778062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.778540] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.778550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 
00:29:24.088 [2024-07-15 21:05:27.778744] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.778754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.779203] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.779214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.779626] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.779636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.780050] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.780059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.780460] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.780471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.780906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.780916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.781345] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.781355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.781749] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.781764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.782184] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.782194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.782634] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.782644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 
00:29:24.088 [2024-07-15 21:05:27.782946] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.782956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.783387] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.783397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.783791] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.783800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.784207] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.784217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.784620] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.784629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.785020] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.785030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.785443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.785453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.785781] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.785791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.786201] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.786211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.088 qpair failed and we were unable to recover it. 00:29:24.088 [2024-07-15 21:05:27.786621] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.088 [2024-07-15 21:05:27.786631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 
00:29:24.089 [2024-07-15 21:05:27.787034] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.787044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.787519] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.787529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.787924] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.787934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.788277] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.788287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.788591] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.788601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.789023] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.789032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.789450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.789460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.789838] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.789848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.790294] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.790304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.790682] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.790691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 
00:29:24.089 [2024-07-15 21:05:27.790907] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.790922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.791322] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.791332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.791725] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.791736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.792038] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.792049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.792476] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.792486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.792715] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.792727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.793129] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.793140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.793500] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.793509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.793937] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.793947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.794437] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.794448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 
00:29:24.089 [2024-07-15 21:05:27.794852] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.794863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.795427] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.795471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.795910] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.795924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.796459] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.796503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.796952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.796964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.797464] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.797508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.797939] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.797952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.798468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.798512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.798950] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.798963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.799463] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.799508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 
00:29:24.089 [2024-07-15 21:05:27.799933] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.799946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.800474] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.800518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.800965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.800977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.801421] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.801465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.801906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.801918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.802484] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.802529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.802966] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.802981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.803492] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.803537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-07-15 21:05:27.803975] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-07-15 21:05:27.803987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.804490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.804534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 
00:29:24.090 [2024-07-15 21:05:27.805054] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.805067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.805589] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.805633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.806076] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.806089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.806593] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.806638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.807040] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.807053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.807522] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.807533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.807920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.807931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.808469] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.808514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.808950] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.808962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.809479] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.809524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 
00:29:24.090 [2024-07-15 21:05:27.809963] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.809976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.810468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.810512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.810952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.810965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.811470] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.811514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.811952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.811965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.812462] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.812512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.812931] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.812944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.813459] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.813503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.813952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.813965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.814525] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.814570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 
00:29:24.090 [2024-07-15 21:05:27.815003] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.815015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.815417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.815427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.815864] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.815874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.816387] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.816432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.816869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.816881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.817364] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.817408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.817832] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.817844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.818233] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.818245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.818580] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.818591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.819024] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.819035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 
00:29:24.090 [2024-07-15 21:05:27.819446] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.819456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.819913] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.819923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.820330] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.820340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.820733] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.820743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.821160] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.821171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.821585] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.821595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.821982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.090 [2024-07-15 21:05:27.821991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.090 qpair failed and we were unable to recover it. 00:29:24.090 [2024-07-15 21:05:27.822422] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.822433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.822822] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.822831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.823347] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.823390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 
00:29:24.091 [2024-07-15 21:05:27.823826] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.823838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.824232] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.824244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.824727] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.824743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.825154] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.825166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.825585] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.825595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.826061] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.826071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.826512] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.826522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.826909] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.826918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.827242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.827253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.827570] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.827581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 
00:29:24.091 [2024-07-15 21:05:27.827990] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.828000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.828351] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.828361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.828831] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.828841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.829359] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.829402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.829833] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.829846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.830231] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.830243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.830670] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.830680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.831118] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.831133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.831527] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.831537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.831969] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.831980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 
00:29:24.091 [2024-07-15 21:05:27.832483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.832526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.832972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.832984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.833477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.833520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.833955] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.833968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.834491] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.834533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.834962] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.834974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.835483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.835525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.835965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.835978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.836475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.836518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.836945] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.836963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 
00:29:24.091 [2024-07-15 21:05:27.837459] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.091 [2024-07-15 21:05:27.837501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.091 qpair failed and we were unable to recover it. 00:29:24.091 [2024-07-15 21:05:27.837932] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.837944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.838466] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.838510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.838952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.838964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.839456] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.839499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.839768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.839783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.840199] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.840210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.840626] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.840637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.841053] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.841063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.841524] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.841534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 
00:29:24.092 [2024-07-15 21:05:27.841946] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.841956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.842441] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.842483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.842735] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.842750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.843187] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.843199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.843619] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.843628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.844022] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.844031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.844452] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.844461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.844869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.844879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.845264] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.845275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.845686] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.845696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 
00:29:24.092 [2024-07-15 21:05:27.846129] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.846140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.846346] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.846360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.846761] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.846771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.847202] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.847212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.847612] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.847621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.847937] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.847947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.848359] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.848369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.848772] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.848782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.849167] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.849177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.849610] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.849620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 
00:29:24.092 [2024-07-15 21:05:27.850006] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.850016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.850481] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.850491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.850903] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.850912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.851296] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.851305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.851700] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.851711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.852037] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.852047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.852545] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.092 [2024-07-15 21:05:27.852555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.092 qpair failed and we were unable to recover it. 00:29:24.092 [2024-07-15 21:05:27.852859] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.852870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.853287] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.853297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.853785] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.853795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 
00:29:24.093 [2024-07-15 21:05:27.854225] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.854235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.854657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.854666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.855053] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.855063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.855471] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.855481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.855903] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.855913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.856332] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.856343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.856749] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.856758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.857166] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.857176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.857608] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.857619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.857942] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.857953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 
00:29:24.093 [2024-07-15 21:05:27.858368] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.858379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.858791] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.858801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.859231] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.859241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.859651] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.859660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.859983] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.859994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.860384] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.860394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.860782] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.860792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.861177] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.861186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.861575] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.861584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.861889] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.861900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 
00:29:24.093 [2024-07-15 21:05:27.862309] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.862319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.862718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.862727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.863157] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.863175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.863564] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.863574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.863961] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.863971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.864379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.864389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.864784] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.864794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.865186] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.865199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.865613] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.865623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.866032] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.866042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 
00:29:24.093 [2024-07-15 21:05:27.866362] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.866372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.866804] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.866814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.867203] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.867214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.867604] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.867614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.868020] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.868029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.868450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.868460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.868844] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.868854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.869269] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.869280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.093 [2024-07-15 21:05:27.869701] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.093 [2024-07-15 21:05:27.869710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.093 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.870117] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.870141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 
00:29:24.094 [2024-07-15 21:05:27.870573] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.870582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.870971] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.870981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.871471] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.871515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.871943] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.871955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.872472] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.872515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.872994] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.873006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.873423] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.873434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.873861] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.873870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.874103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.874118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.874537] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.874547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 
00:29:24.094 [2024-07-15 21:05:27.874937] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.874946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.875490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.875533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.875979] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.875992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.876477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.876519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.876956] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.876973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.877499] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.877542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.877982] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.877994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.878495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.878538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.878978] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.878991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.879509] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.879552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 
00:29:24.094 [2024-07-15 21:05:27.879996] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.880008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.880483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.880494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.880882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.880892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.881382] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.881425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.881767] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.881779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.882191] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.882202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.882606] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.882616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.883005] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.883015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.883481] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.883491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.883877] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.883887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 
00:29:24.094 [2024-07-15 21:05:27.884313] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.884324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.884762] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.884772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.885156] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.885166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.885492] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.885502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.885910] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.885920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.886372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.886383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.886718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.886729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.887182] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.094 [2024-07-15 21:05:27.887192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.094 qpair failed and we were unable to recover it. 00:29:24.094 [2024-07-15 21:05:27.887619] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.887629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.888027] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.888036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 
00:29:24.095 [2024-07-15 21:05:27.888437] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.888448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.888829] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.888839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.889225] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.889235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.889666] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.889675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.890070] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.890079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.890478] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.890488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.890858] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.890867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.891287] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.891297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.891738] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.891748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.892172] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.892182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 
00:29:24.095 [2024-07-15 21:05:27.892402] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.892416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.892848] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.892858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.893246] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.893257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.893688] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.893698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.894101] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.894111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.894535] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.894545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.894924] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.894935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.895350] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.895360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.895790] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.895799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.896319] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.896359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 
00:29:24.095 [2024-07-15 21:05:27.896796] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.896809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.897203] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.897214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.897626] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.897636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.897849] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.897864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.898321] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.898332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.898745] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.898756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.899159] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.899169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.899576] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.899586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.900012] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.900021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-07-15 21:05:27.900437] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-07-15 21:05:27.900448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 
00:29:24.096 [2024-07-15 21:05:27.900829] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.900838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.901262] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.901272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.901675] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.901685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.902095] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.902104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.902558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.902568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.903039] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.903049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.903483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.903495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.903903] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.903914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.904426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.904468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.904893] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.904905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 
00:29:24.096 [2024-07-15 21:05:27.905395] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.905437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.905861] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.905874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.906373] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.906420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.906847] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.906860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.907276] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.907286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.907731] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.907741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.908152] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.908164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.908502] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.908512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.908907] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.908917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-07-15 21:05:27.909300] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-07-15 21:05:27.909310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 
00:29:24.096 [2024-07-15 21:05:27.909727] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.096 [2024-07-15 21:05:27.909736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420
00:29:24.096 qpair failed and we were unable to recover it.
[... the same pair of errors — connect() failed with errno = 111 (ECONNREFUSED) followed by a sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 — repeats identically for every retry between 21:05:27.909 and 21:05:28.000, and each qpair failed without recovering ...]
00:29:24.374 [2024-07-15 21:05:28.000225] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.374 [2024-07-15 21:05:28.000241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420
00:29:24.374 qpair failed and we were unable to recover it.
00:29:24.374 [2024-07-15 21:05:28.000653] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.000663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.001051] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.001061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.001469] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.001479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.001781] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.001791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.002207] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.002217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.002630] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.002640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.002929] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.002938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.003434] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.003444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.003874] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.003884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.004357] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.004396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 
00:29:24.374 [2024-07-15 21:05:28.004830] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.004843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.005279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.005290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.005658] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.005668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.006083] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.006093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.006505] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.006515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.006949] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.006959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.007385] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.007425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.007755] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.007767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.008184] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.008194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.008652] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.008662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 
00:29:24.374 [2024-07-15 21:05:28.009054] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.009064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.009457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.009467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.009877] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.374 [2024-07-15 21:05:28.009887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.374 qpair failed and we were unable to recover it. 00:29:24.374 [2024-07-15 21:05:28.010329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.010339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.010744] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.010753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.011172] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.011182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.011593] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.011610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.012083] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.012093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.012510] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.012521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.013025] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.013035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 
00:29:24.375 [2024-07-15 21:05:28.013417] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.013428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.013728] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.013738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.014140] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.014150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.014571] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.014581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.015010] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.015021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.015524] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.015535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.015925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.015934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.016343] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.016353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.016756] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.016766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.017181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.017192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 
00:29:24.375 [2024-07-15 21:05:28.017579] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.017589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.018012] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.018022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.018443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.018453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.018840] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.018850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.019265] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.019276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.019659] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.019669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.020121] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.020136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.020538] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.020547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.020918] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.020928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.021376] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.021416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 
00:29:24.375 [2024-07-15 21:05:28.021874] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.021885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.022389] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.022428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.022842] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.022854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.023278] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.023293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.023766] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.023777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.024082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.024092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.375 [2024-07-15 21:05:28.024409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.375 [2024-07-15 21:05:28.024419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.375 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.024880] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.024890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.025349] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.025389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.025825] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.025837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 
00:29:24.376 [2024-07-15 21:05:28.026223] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.026233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.026649] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.026660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.027067] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.027077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.027527] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.027537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.027970] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.027981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.028488] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.028528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.028972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.028985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.029457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.029496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.029898] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.029910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.030350] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.030389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 
00:29:24.376 [2024-07-15 21:05:28.030826] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.030838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.031354] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.031395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.031729] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.031742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.032160] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.032171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.032607] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.032619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.033012] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.033025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.033492] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.033504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.033896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.033905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.034306] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.034317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.034703] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.034712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 
00:29:24.376 [2024-07-15 21:05:28.035095] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.035105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.035483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.035494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.035796] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.035806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.036239] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.036249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.036677] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.036687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.037164] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.037175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.037619] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.037628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.038052] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.038061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.038499] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.376 [2024-07-15 21:05:28.038508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.376 qpair failed and we were unable to recover it. 00:29:24.376 [2024-07-15 21:05:28.038928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.038939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 
00:29:24.377 [2024-07-15 21:05:28.039347] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.039358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.039792] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.039804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.040206] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.040215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.040545] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.040554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.040866] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.040876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.041350] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.041359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.041667] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.041677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.042130] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.042140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.042521] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.042531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.042964] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.042973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 
00:29:24.377 [2024-07-15 21:05:28.043468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.043507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.043850] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.043863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.044176] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.044187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.044665] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.044674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.045061] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.045070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.045457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.045467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.045879] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.045889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.046301] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.046311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.046774] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.046783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.047167] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.047177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 
00:29:24.377 [2024-07-15 21:05:28.047638] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.047648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.047981] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.047990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.048427] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.048437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.048845] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.048854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.049382] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.049420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.049864] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.049876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.377 [2024-07-15 21:05:28.050407] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.377 [2024-07-15 21:05:28.050447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.377 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.050865] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.050876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.051386] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.051425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.051813] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.051825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 
00:29:24.378 [2024-07-15 21:05:28.052271] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.052283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.052717] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.052733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.053137] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.053148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.053433] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.053444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.053859] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.053870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.054295] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.054305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.054717] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.054727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.055146] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.055157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.055585] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.055595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.056005] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.056014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 
00:29:24.378 [2024-07-15 21:05:28.056411] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.056421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.056813] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.056823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.057034] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.057047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.057467] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.057477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.057857] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.057866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.058252] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.058263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.058726] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.058736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.059142] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.059151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.059564] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.059573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.059960] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.059969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 
00:29:24.378 [2024-07-15 21:05:28.060369] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.060379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.060814] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.060825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.061237] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.061247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.061646] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.061655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.061943] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.061952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.062267] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.062278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.062699] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.062709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.063133] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.063144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.378 [2024-07-15 21:05:28.063565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.378 [2024-07-15 21:05:28.063577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.378 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.064036] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.064046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 
00:29:24.379 [2024-07-15 21:05:28.064333] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.064343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.064730] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.064740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.065031] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.065041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.065423] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.065433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.065727] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.065736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.066175] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.066185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.066608] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.066617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.066999] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.067008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.067421] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.067430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.067807] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.067817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 
00:29:24.379 [2024-07-15 21:05:28.068177] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.068187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.068633] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.068643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.069050] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.069061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.069471] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.069481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.069864] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.069873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.070339] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.070349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.070746] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.070755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.071151] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.071168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.071655] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.071665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.072053] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.072062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 
00:29:24.379 [2024-07-15 21:05:28.072292] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.072301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.072718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.072727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.073117] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.073131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.073527] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.073537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.073939] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.073948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.074222] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.074233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.074625] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.074635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.074936] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.074946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.075379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.075389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.379 [2024-07-15 21:05:28.075811] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.075820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 
00:29:24.379 [2024-07-15 21:05:28.076198] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.379 [2024-07-15 21:05:28.076209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.379 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.076609] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.076619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.076942] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.076951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.077245] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.077254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.077745] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.077755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.078135] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.078145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.078589] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.078599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.078991] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.079000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.079403] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.079413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.079831] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.079841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 
00:29:24.380 [2024-07-15 21:05:28.080355] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.080393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.080825] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.080838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.081249] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.081260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.081698] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.081707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.082141] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.082152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.082544] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.082554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.082937] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.082947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.083356] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.083394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.083812] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.083825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.084365] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.084404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 
00:29:24.380 [2024-07-15 21:05:28.084818] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.084830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.085277] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.085288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.085703] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.085713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.086151] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.086161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.086579] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.086588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.087012] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.087022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.087414] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.087424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.087807] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.087816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.088205] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.088216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.088511] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.088521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 
00:29:24.380 [2024-07-15 21:05:28.088928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.088938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.089349] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.089360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.380 [2024-07-15 21:05:28.089767] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.380 [2024-07-15 21:05:28.089776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.380 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.090160] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.090170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.090393] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.090403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.090837] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.090846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.091159] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.091171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.091567] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.091577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.091958] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.091967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.092363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.092373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 
00:29:24.381 [2024-07-15 21:05:28.092797] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.092806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.093197] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.093208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.093629] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.093639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.094084] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.094094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.094505] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.094516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.094920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.094929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.095336] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.095345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.095724] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.095734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.096185] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.096195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.096571] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.096580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 
00:29:24.381 [2024-07-15 21:05:28.096840] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.096854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.097279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.097289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.097690] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.097700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.098145] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.098155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.098566] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.098575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.098965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.098974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.099381] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.099391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.099785] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.099794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.100251] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.100262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.100689] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.100698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 
00:29:24.381 [2024-07-15 21:05:28.101086] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.101095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.101503] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.101512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.101896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.101905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.102411] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.102453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.102885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.102897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.103274] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.381 [2024-07-15 21:05:28.103285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.381 qpair failed and we were unable to recover it. 00:29:24.381 [2024-07-15 21:05:28.103715] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.103725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.104132] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.104143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.104532] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.104551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.104833] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.104842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 
00:29:24.382 [2024-07-15 21:05:28.105356] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.105395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.105828] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.105840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.106239] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.106250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.106667] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.106677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.107087] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.107097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.107405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.107414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.107819] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.107828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.108222] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.108232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.108654] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.108663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.109070] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.109080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 
00:29:24.382 [2024-07-15 21:05:28.109491] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.109501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.109882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.109891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.110285] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.110295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.110719] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.110729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.111152] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.111162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.111567] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.111578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.111958] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.111968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.112414] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.112424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.112806] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.112815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.113126] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.113137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 
00:29:24.382 [2024-07-15 21:05:28.113432] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.113444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.113860] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.113869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.114069] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.114079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.382 qpair failed and we were unable to recover it. 00:29:24.382 [2024-07-15 21:05:28.114481] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.382 [2024-07-15 21:05:28.114493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.114899] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.114909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.115392] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.115429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.115781] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.115794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.116100] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.116110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.116558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.116568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.116956] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.116965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 
00:29:24.383 [2024-07-15 21:05:28.117455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.117492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.117949] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.117962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.118454] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.118492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.118926] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.118938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.119437] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.119476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.119702] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.119717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.120133] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.120145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.120610] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.120620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.121011] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.121021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.121433] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.121443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 
00:29:24.383 [2024-07-15 21:05:28.121833] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.121843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.122344] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.122381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.122731] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.122742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.123241] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.123252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.123662] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.123672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.124066] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.124076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.124489] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.124500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.124995] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.125005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.125431] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.125442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.125825] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.125835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 
00:29:24.383 [2024-07-15 21:05:28.126259] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.126269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.126664] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.126673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.126880] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.126894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.127304] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.127315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.127694] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.127704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.128084] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.128094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.383 [2024-07-15 21:05:28.128500] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.383 [2024-07-15 21:05:28.128510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.383 qpair failed and we were unable to recover it. 00:29:24.384 [2024-07-15 21:05:28.128896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.384 [2024-07-15 21:05:28.128906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.384 qpair failed and we were unable to recover it. 00:29:24.384 [2024-07-15 21:05:28.129140] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.384 [2024-07-15 21:05:28.129153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.384 qpair failed and we were unable to recover it. 00:29:24.384 [2024-07-15 21:05:28.129562] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.384 [2024-07-15 21:05:28.129573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.384 qpair failed and we were unable to recover it. 
00:29:24.390 [2024-07-15 21:05:28.217507] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.217517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.217944] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.217954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.218463] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.218499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.218841] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.218853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.219162] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.219173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.219484] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.219494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.219890] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.219899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.220369] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.220379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.220762] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.220771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.221187] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.221197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 
00:29:24.390 [2024-07-15 21:05:28.221590] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.221600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.222009] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.222019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.222332] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.222342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.222644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.222654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.223057] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.223067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.223463] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.223474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.223880] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.223890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.224287] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.224297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.224731] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.224741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 00:29:24.390 [2024-07-15 21:05:28.224951] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.224961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.390 qpair failed and we were unable to recover it. 
00:29:24.390 [2024-07-15 21:05:28.225369] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.390 [2024-07-15 21:05:28.225378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.225772] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.225783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.226159] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.226169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.226601] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.226611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.227095] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.227104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.227645] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.227656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.228042] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.228052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.228467] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.228477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.228870] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.228881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.229375] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.229412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 
00:29:24.391 [2024-07-15 21:05:28.229860] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.229873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.230410] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.230447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.230903] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.230916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.231478] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.231515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.231986] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.231997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.232488] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.232526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.232940] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.232952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.233186] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.233202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.233594] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.233604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.233916] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.233934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 
00:29:24.391 [2024-07-15 21:05:28.234258] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.234268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.234678] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.234688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.235013] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.235023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.235431] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.235442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.235839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.235849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.236241] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.236255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.236462] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.236474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.236907] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.236917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.237334] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.237344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.237797] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.237806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 
00:29:24.391 [2024-07-15 21:05:28.238274] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.391 [2024-07-15 21:05:28.238284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.391 qpair failed and we were unable to recover it. 00:29:24.391 [2024-07-15 21:05:28.238691] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.238701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.239109] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.239119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.239553] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.239563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.239963] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.239972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.240451] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.240488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.240921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.240933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.241483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.241520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.241918] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.241930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.242338] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.242374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 
00:29:24.392 [2024-07-15 21:05:28.242829] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.242841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.243315] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.243353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.243784] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.243797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.244210] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.244221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.244633] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.244643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.245056] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.245066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.245497] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.245507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.245813] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.245823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.246146] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.246156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.246588] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.246597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 
00:29:24.392 [2024-07-15 21:05:28.247001] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.247010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.247324] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.247334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.247737] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.247751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.248046] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.248056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.248553] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.248563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.248961] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.248970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.249292] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.249302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.249680] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.249689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.250117] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.250131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.250446] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.250456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 
00:29:24.392 [2024-07-15 21:05:28.250861] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.250871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.251257] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.251267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.251564] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.251574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.392 [2024-07-15 21:05:28.251964] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.392 [2024-07-15 21:05:28.251974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.392 qpair failed and we were unable to recover it. 00:29:24.393 [2024-07-15 21:05:28.252251] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.393 [2024-07-15 21:05:28.252261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.393 qpair failed and we were unable to recover it. 00:29:24.393 [2024-07-15 21:05:28.252749] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.252758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.253135] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.253147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.253530] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.253539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.253824] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.253834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.254244] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.254254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 
00:29:24.665 [2024-07-15 21:05:28.254672] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.254682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.255066] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.255077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.255493] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.255503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.255797] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.255807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.256124] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.256134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.256528] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.256538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.256974] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.256984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.257529] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.257565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.257783] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.257797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.258235] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.258246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 
00:29:24.665 [2024-07-15 21:05:28.258665] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.258676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.259105] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.259115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.259528] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.259538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.259958] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.259968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.260491] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.260528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.260953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.260965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.261382] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.261419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.261857] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.261869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.262344] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.262382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.262832] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.262844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 
00:29:24.665 [2024-07-15 21:05:28.263342] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.263379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.263799] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.263811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.264196] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.665 [2024-07-15 21:05:28.264206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.665 qpair failed and we were unable to recover it. 00:29:24.665 [2024-07-15 21:05:28.264426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.264442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.264868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.264878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.265187] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.265197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.265621] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.265631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.265923] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.265932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.266360] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.266370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.266669] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.266678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 
00:29:24.666 [2024-07-15 21:05:28.267088] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.267098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.267391] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.267407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.267818] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.267827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.268214] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.268223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.268644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.268653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.269072] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.269081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.269551] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.269561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.269935] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.269945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.270274] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.270284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.270687] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.270696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 
00:29:24.666 [2024-07-15 21:05:28.271034] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.271043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.271568] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.271577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.271869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.271878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.272251] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.272260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.272691] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.272700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.273089] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.273099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.273427] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.273437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.273840] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.273850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.274279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.274288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.274692] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.274702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 
00:29:24.666 [2024-07-15 21:05:28.274900] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.274911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.275370] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.275379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.275764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.275773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.276202] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.276212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.276620] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.276629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.277042] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.277052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.277473] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.277483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.277944] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.277953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.278167] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.278179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.278602] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.278612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 
00:29:24.666 [2024-07-15 21:05:28.279033] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.279042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.279443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.279454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.279870] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.279879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.666 qpair failed and we were unable to recover it. 00:29:24.666 [2024-07-15 21:05:28.280240] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.666 [2024-07-15 21:05:28.280250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.280671] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.280680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.281075] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.281084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.281478] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.281489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.281895] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.281905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.282365] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.282375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.282801] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.282810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 
00:29:24.667 [2024-07-15 21:05:28.283019] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.283030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.283452] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.283462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.283931] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.283940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.284433] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.284471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.284888] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.284900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.285481] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.285519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.285942] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.285955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.286389] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.286430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.286876] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.286888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.287406] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.287443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 
00:29:24.667 [2024-07-15 21:05:28.287835] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.287846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.288361] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.288399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.288747] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.288759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.289247] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.289257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.289654] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.289664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.290061] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.290070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.290536] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.290546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.290894] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.290903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.291189] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.291199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.291602] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.291611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 
00:29:24.667 [2024-07-15 21:05:28.291996] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.292006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.292439] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.292449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.292862] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.292872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.293234] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.293244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.293662] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.293671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.294084] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.294093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.294331] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.294341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.294741] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.294751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.295166] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.295177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.295597] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.295607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 
00:29:24.667 [2024-07-15 21:05:28.295983] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.295992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.296393] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.296402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.296782] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.296791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.297171] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.667 [2024-07-15 21:05:28.297181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.667 qpair failed and we were unable to recover it. 00:29:24.667 [2024-07-15 21:05:28.297735] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.297747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.298213] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.298223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.298621] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.298630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.299082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.299092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.299498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.299507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.299805] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.299815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 
00:29:24.668 [2024-07-15 21:05:28.300012] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.300022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.300400] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.300410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.300787] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.300797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.301080] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.301090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.301475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.301485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.301921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.301931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.302223] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.302233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.302644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.302653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.302964] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.302975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.303387] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.303396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 
00:29:24.668 [2024-07-15 21:05:28.303736] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.303745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.304155] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.304165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.304398] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.304413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.304715] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.304725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.305156] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.305167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.305672] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.305682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.305873] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.305882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.306290] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.306301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.306673] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.306682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.307053] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.307062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 
00:29:24.668 [2024-07-15 21:05:28.307396] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.307406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.307798] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.307807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.308315] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.308325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.308729] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.308739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.309102] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.309112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.309557] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.309567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.309938] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.309948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.310438] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.310476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.310908] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.310920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.311364] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.311400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 
00:29:24.668 [2024-07-15 21:05:28.311810] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.311822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.312238] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.312248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.312673] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.312682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.312975] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.312985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.313363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.668 [2024-07-15 21:05:28.313373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.668 qpair failed and we were unable to recover it. 00:29:24.668 [2024-07-15 21:05:28.313756] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.313769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.314150] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.314161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.314483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.314493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.314930] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.314940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.315379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.315389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 
00:29:24.669 [2024-07-15 21:05:28.315797] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.315806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.316242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.316252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.316631] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.316641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.317050] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.317060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.317438] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.317447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.317836] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.317846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.318249] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.318259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.318672] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.318681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.319001] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.319010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.319404] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.319414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 
00:29:24.669 [2024-07-15 21:05:28.319836] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.319845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.320236] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.320245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.320624] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.320634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.321044] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.321053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.321504] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.321514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.321908] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.321918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.322328] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.322338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.322640] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.322650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.323074] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.323083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.323314] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.323324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 
00:29:24.669 [2024-07-15 21:05:28.323729] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.323739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.324134] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.324144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.324562] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.324573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.324997] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.325007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.325404] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.325413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.325831] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.325841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.326246] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.326255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.326669] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.326678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.327110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.669 [2024-07-15 21:05:28.327120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.669 qpair failed and we were unable to recover it. 00:29:24.669 [2024-07-15 21:05:28.327522] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.327532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 
00:29:24.670 [2024-07-15 21:05:28.327917] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.327926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.328426] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.328463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.328899] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.328911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.329423] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.329460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.329898] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.329910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.330452] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.330489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.330929] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.330941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.331427] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.331464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.331920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.331933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.332457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.332494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 
00:29:24.670 [2024-07-15 21:05:28.332831] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.332843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.333312] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.333348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.333762] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.333774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.334155] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.334166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.334623] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.334633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.335022] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.335031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.335243] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.335257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.335673] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.335683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.336087] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.336096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.336513] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.336527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 
00:29:24.670 [2024-07-15 21:05:28.336933] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.336942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.337345] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.337354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.337764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.337774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.338200] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.338210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.338582] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.338592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.338841] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.338852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.339257] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.339267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.339649] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.339658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.340055] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.340064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.340465] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.340475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 
00:29:24.670 [2024-07-15 21:05:28.340861] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.340870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.341254] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.341264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.341650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.341659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.341963] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.341973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.342364] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.342373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.342809] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.342818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.343199] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.343208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.343608] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.343617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.344037] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.344046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 00:29:24.670 [2024-07-15 21:05:28.344450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.670 [2024-07-15 21:05:28.344460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.670 qpair failed and we were unable to recover it. 
00:29:24.670 [2024-07-15 21:05:28.344878] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.344888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.345267] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.345277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.345631] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.345645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.346029] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.346039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.346433] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.346443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.346824] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.346833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.347236] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.347245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.347684] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.347693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.348083] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.348093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.348455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.348465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 
00:29:24.671 [2024-07-15 21:05:28.348890] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.348899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.349324] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.349334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.349747] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.349757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.350027] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.350037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.350337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.350347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.350775] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.350784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.351164] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.351174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.351601] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.351610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.352062] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.352072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.352467] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.352476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 
00:29:24.671 [2024-07-15 21:05:28.352899] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.352908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.353289] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.353299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.353798] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.353808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.354211] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.354221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.354523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.354532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.354935] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.354944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.355323] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.355333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.355757] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.355766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.356190] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.356200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.356584] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.356593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 
00:29:24.671 [2024-07-15 21:05:28.356897] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.356907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.357321] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.357331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.357712] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.357722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.358111] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.358120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.358543] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.358552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.358978] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.358988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.359523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.359560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.359967] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.359979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.360490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.360527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.360903] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.360915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 
00:29:24.671 [2024-07-15 21:05:28.361445] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.671 [2024-07-15 21:05:28.361482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.671 qpair failed and we were unable to recover it. 00:29:24.671 [2024-07-15 21:05:28.361892] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.361904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.362383] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.362420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.362849] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.362861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.363375] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.363411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.363847] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.363859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.364366] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.364403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.364764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.364781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.365198] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.365209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.365606] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.365616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 
00:29:24.672 [2024-07-15 21:05:28.365995] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.366005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.366388] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.366398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.366801] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.366811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.367029] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.367043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.367472] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.367482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.367862] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.367871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.368257] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.368267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.368669] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.368678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.369082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.369092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.369477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.369487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 
00:29:24.672 [2024-07-15 21:05:28.369868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.369877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.370256] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.370266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.370656] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.370666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.371084] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.371093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.371493] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.371503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.371880] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.371890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.372287] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.372296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.372714] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.372723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.373038] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.373047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.373458] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.373468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 
00:29:24.672 [2024-07-15 21:05:28.373848] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.373857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.374236] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.374246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.374548] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.374557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.374883] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.374892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.375297] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.375311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.375686] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.375696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.376083] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.376092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.376490] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.376499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.376898] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.376907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.377288] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.377298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 
00:29:24.672 [2024-07-15 21:05:28.377695] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.672 [2024-07-15 21:05:28.377705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.672 qpair failed and we were unable to recover it. 00:29:24.672 [2024-07-15 21:05:28.378023] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.378033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.378440] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.378450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.378850] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.378860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.379272] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.379281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.379598] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.379608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.380054] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.380064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.380352] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.380361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.380787] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.380797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.381288] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.381298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 
00:29:24.673 [2024-07-15 21:05:28.381686] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.381696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.382077] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.382086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.382476] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.382485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.382909] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.382918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.383342] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.383352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.383747] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.383757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.384160] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.384171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.384596] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.384606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.385009] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.385018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.385468] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.385477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 
00:29:24.673 [2024-07-15 21:05:28.385859] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.385868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.386320] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.386331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.386753] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.386762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.387144] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.387154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.387579] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.387588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.387917] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.387927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.388334] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.388344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.388736] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.388745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.389040] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.389051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.389450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.389460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 
00:29:24.673 [2024-07-15 21:05:28.389843] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.389852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.390061] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.390073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.390472] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.390482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.390861] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.390871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.391271] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.391281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.391685] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.391694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.392135] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.392145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.673 qpair failed and we were unable to recover it. 00:29:24.673 [2024-07-15 21:05:28.392560] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.673 [2024-07-15 21:05:28.392570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.392885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.392894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.393310] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.393319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 
00:29:24.674 [2024-07-15 21:05:28.393741] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.393750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.394029] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.394039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.394461] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.394471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.394855] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.394865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.395329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.395339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.395728] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.395737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.396112] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.396130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.396534] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.396544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.396939] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.396949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.397488] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.397525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 
00:29:24.674 [2024-07-15 21:05:28.397972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.397984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.398480] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.398517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.398965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.398976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.399530] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.399567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.400011] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.400023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.400556] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.400594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.401048] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.401060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.401514] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.401525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.401800] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.401811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.402212] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.402223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 
00:29:24.674 [2024-07-15 21:05:28.402628] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.402637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.402931] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.402940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.403347] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.403361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.403784] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.403793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.404108] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.404118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.404526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.404536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.404915] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.404925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.405452] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.405489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.405929] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.405941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.406452] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.406489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 
00:29:24.674 [2024-07-15 21:05:28.406697] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.406710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.407135] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.407146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.407564] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.407574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.407959] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.407968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.408475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.408512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.408943] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.408955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.409466] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.674 [2024-07-15 21:05:28.409504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.674 qpair failed and we were unable to recover it. 00:29:24.674 [2024-07-15 21:05:28.409950] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.675 [2024-07-15 21:05:28.409962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.675 qpair failed and we were unable to recover it. 00:29:24.675 [2024-07-15 21:05:28.410461] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.675 [2024-07-15 21:05:28.410498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.675 qpair failed and we were unable to recover it. 00:29:24.675 [2024-07-15 21:05:28.410928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.675 [2024-07-15 21:05:28.410940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.675 qpair failed and we were unable to recover it. 
00:29:24.675 [2024-07-15 21:05:28.411437] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.675 [2024-07-15 21:05:28.411473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420
00:29:24.675 qpair failed and we were unable to recover it.
00:29:24.675 [2024-07-15 21:05:28.411912] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.675 [2024-07-15 21:05:28.411924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420
00:29:24.675 qpair failed and we were unable to recover it.
[... the same three messages (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeat for every further connection attempt logged between 21:05:28.412 and 21:05:28.499 ...]
00:29:24.680 [2024-07-15 21:05:28.499580] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.680 [2024-07-15 21:05:28.499590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420
00:29:24.680 qpair failed and we were unable to recover it.
00:29:24.680 [2024-07-15 21:05:28.499969] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.680 [2024-07-15 21:05:28.499979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420
00:29:24.680 qpair failed and we were unable to recover it.
00:29:24.680 [2024-07-15 21:05:28.500392] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.500401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.500785] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.500794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.501172] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.501182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.501608] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.501618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.501999] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.502008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.502467] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.502477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.502782] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.502791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.503204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.503214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.503634] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.503643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.504051] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.504061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 
00:29:24.680 [2024-07-15 21:05:28.504469] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.504479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.504905] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.504915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.505313] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.505323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.505750] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.505759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.506243] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.506254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.506573] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.506583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.506891] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.506904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.507280] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.507290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.507598] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.507608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.507921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.507930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 
00:29:24.680 [2024-07-15 21:05:28.508224] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.508234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.508659] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.508668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.509045] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.509055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.509463] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.509473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.509851] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.509860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.510261] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.680 [2024-07-15 21:05:28.510271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.680 qpair failed and we were unable to recover it. 00:29:24.680 [2024-07-15 21:05:28.510659] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.510668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.511053] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.511063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.511453] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.511463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.511834] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.511843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 
00:29:24.681 [2024-07-15 21:05:28.511941] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.511955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.512267] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.512278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.512683] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.512692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.512899] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.512910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.513320] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.513330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.513732] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.513741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.514162] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.514172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.514564] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.514573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.514953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.514962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.515352] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.515361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 
00:29:24.681 [2024-07-15 21:05:28.515778] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.515787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.516171] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.516180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.516501] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.516511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.516787] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.516800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.517219] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.517229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.517530] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.517540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.517960] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.517969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.518378] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.518387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.518767] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.518777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.519155] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.519165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 
00:29:24.681 [2024-07-15 21:05:28.519610] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.519620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.519999] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.520008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.520477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.520487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.520805] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.520814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.521239] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.521250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.521652] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.521661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.522063] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.522073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.522478] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.522488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.522872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.522881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.523260] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.523270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 
00:29:24.681 [2024-07-15 21:05:28.523683] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.523692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.524076] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.524086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.524507] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.524516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.524939] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.524949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.525443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.525480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.525912] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.525924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.526456] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.681 [2024-07-15 21:05:28.526493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.681 qpair failed and we were unable to recover it. 00:29:24.681 [2024-07-15 21:05:28.526940] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.526952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.527453] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.527490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.527837] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.527849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 
00:29:24.682 [2024-07-15 21:05:28.528380] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.528417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.528854] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.528866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.529398] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.529435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.529745] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.529758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.530141] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.530152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.530464] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.530473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.530868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.530878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.531258] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.531268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.531657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.531667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.531969] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.531978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 
00:29:24.682 [2024-07-15 21:05:28.532360] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.532370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.532772] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.532782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.533208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.533217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.533623] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.533633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.533929] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.533939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.534333] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.534343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.534769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.534779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.535187] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.535197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.535615] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.535624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.536026] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.536035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 
00:29:24.682 [2024-07-15 21:05:28.536421] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.536431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.536869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.536878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.537379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.537388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.537767] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.537776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.538178] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.538188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.538590] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.538600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.538978] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.538987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.539367] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.539377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.539839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.539849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.540348] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.540385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 
00:29:24.682 [2024-07-15 21:05:28.540814] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.540825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.541206] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.541216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.682 [2024-07-15 21:05:28.541646] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.682 [2024-07-15 21:05:28.541655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.682 qpair failed and we were unable to recover it. 00:29:24.683 [2024-07-15 21:05:28.542061] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.683 [2024-07-15 21:05:28.542071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.683 qpair failed and we were unable to recover it. 00:29:24.683 [2024-07-15 21:05:28.542286] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.683 [2024-07-15 21:05:28.542301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.683 qpair failed and we were unable to recover it. 00:29:24.683 [2024-07-15 21:05:28.542703] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.683 [2024-07-15 21:05:28.542713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.683 qpair failed and we were unable to recover it. 00:29:24.683 [2024-07-15 21:05:28.543017] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.683 [2024-07-15 21:05:28.543026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.683 qpair failed and we were unable to recover it. 00:29:24.683 [2024-07-15 21:05:28.543345] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.683 [2024-07-15 21:05:28.543355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.683 qpair failed and we were unable to recover it. 00:29:24.683 [2024-07-15 21:05:28.543815] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.683 [2024-07-15 21:05:28.543825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.683 qpair failed and we were unable to recover it. 00:29:24.683 [2024-07-15 21:05:28.544202] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.683 [2024-07-15 21:05:28.544212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.683 qpair failed and we were unable to recover it. 
00:29:24.683 [2024-07-15 21:05:28.544639] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.683 [2024-07-15 21:05:28.544649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.683 qpair failed and we were unable to recover it. 00:29:24.683 [2024-07-15 21:05:28.545040] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.683 [2024-07-15 21:05:28.545056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.683 qpair failed and we were unable to recover it. 00:29:24.683 [2024-07-15 21:05:28.545454] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.683 [2024-07-15 21:05:28.545463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.683 qpair failed and we were unable to recover it. 00:29:24.683 [2024-07-15 21:05:28.545768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.683 [2024-07-15 21:05:28.545778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.683 qpair failed and we were unable to recover it. 00:29:24.955 [2024-07-15 21:05:28.546208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.546219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 00:29:24.955 [2024-07-15 21:05:28.546676] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.546686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 00:29:24.955 [2024-07-15 21:05:28.547061] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.547072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 00:29:24.955 [2024-07-15 21:05:28.547477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.547487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 00:29:24.955 [2024-07-15 21:05:28.547869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.547879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 00:29:24.955 [2024-07-15 21:05:28.548284] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.548294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 
00:29:24.955 [2024-07-15 21:05:28.548708] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.548718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 00:29:24.955 [2024-07-15 21:05:28.549106] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.549116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 00:29:24.955 [2024-07-15 21:05:28.549535] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.549544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 00:29:24.955 [2024-07-15 21:05:28.549921] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.549930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 00:29:24.955 [2024-07-15 21:05:28.550467] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.550504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 00:29:24.955 [2024-07-15 21:05:28.550854] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.550867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 00:29:24.955 [2024-07-15 21:05:28.551399] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.551436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 00:29:24.955 [2024-07-15 21:05:28.551874] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.551886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 00:29:24.955 [2024-07-15 21:05:28.552312] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.552349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 00:29:24.955 [2024-07-15 21:05:28.552777] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.955 [2024-07-15 21:05:28.552789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.955 qpair failed and we were unable to recover it. 
00:29:24.955 [2024-07-15 21:05:28.553180] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.955 [2024-07-15 21:05:28.553191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420
00:29:24.955 qpair failed and we were unable to recover it.
00:29:24.955 [2024-07-15 21:05:28.553517] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.955 [2024-07-15 21:05:28.553527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420
00:29:24.955 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error pair for tqpair=0x2397220, addr=10.0.0.2, port=4420 repeats for every retry between 21:05:28.553 and 21:05:28.638 ...]
00:29:24.959 [2024-07-15 21:05:28.638794] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.959 [2024-07-15 21:05:28.638804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420
00:29:24.959 qpair failed and we were unable to recover it.
00:29:24.959 [2024-07-15 21:05:28.639082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.959 [2024-07-15 21:05:28.639092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.959 qpair failed and we were unable to recover it. 00:29:24.959 [2024-07-15 21:05:28.639481] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.959 [2024-07-15 21:05:28.639492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.959 qpair failed and we were unable to recover it. 00:29:24.959 [2024-07-15 21:05:28.639872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.959 [2024-07-15 21:05:28.639882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.959 qpair failed and we were unable to recover it. 00:29:24.959 [2024-07-15 21:05:28.640262] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.959 [2024-07-15 21:05:28.640272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.959 qpair failed and we were unable to recover it. 00:29:24.959 [2024-07-15 21:05:28.640551] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.959 [2024-07-15 21:05:28.640560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.959 qpair failed and we were unable to recover it. 00:29:24.959 [2024-07-15 21:05:28.640988] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.959 [2024-07-15 21:05:28.640998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.959 qpair failed and we were unable to recover it. 00:29:24.959 [2024-07-15 21:05:28.641391] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.959 [2024-07-15 21:05:28.641401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.959 qpair failed and we were unable to recover it. 00:29:24.959 [2024-07-15 21:05:28.641797] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.959 [2024-07-15 21:05:28.641807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.959 qpair failed and we were unable to recover it. 00:29:24.959 [2024-07-15 21:05:28.642238] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.959 [2024-07-15 21:05:28.642248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.959 qpair failed and we were unable to recover it. 00:29:24.959 [2024-07-15 21:05:28.642656] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.959 [2024-07-15 21:05:28.642666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.959 qpair failed and we were unable to recover it. 
00:29:24.960 [2024-07-15 21:05:28.643051] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.643062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.643458] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.643467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.643764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.643775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.644167] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.644178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.644472] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.644482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.644889] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.644901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.645253] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.645263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.645665] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.645675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.646063] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.646073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.646505] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.646515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 
00:29:24.960 [2024-07-15 21:05:28.646901] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.646910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.647298] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.647308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.647689] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.647698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.648080] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.648090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.648305] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.648315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.648704] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.648713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.649106] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.649116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.649555] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.649565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.649879] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.649889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.650291] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.650302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 
00:29:24.960 [2024-07-15 21:05:28.650690] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.650699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.651082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.651091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.651482] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.651491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.651881] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.651890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.652364] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.652401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.652701] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.652713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.653092] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.653102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.653513] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.653524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.653965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.653974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.654504] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.654540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397220 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 
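The long run of failures above is one pattern repeated per connection attempt: posix_sock_create() gets errno 111 back from connect(), and nvme_tcp_qpair_connect_sock() then marks the qpair as unrecoverable. On Linux errno 111 is ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420 while the target is being torn down. A minimal standalone C sketch (not part of the test code; the address and port are copied from the log purely for illustration) that produces the same errno:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr = {
                    .sin_family = AF_INET,
                    .sin_port = htons(4420),          /* NVMe/TCP default port, as in the log */
            };
            inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
                    /* With no listener on the port this prints:
                     * "connect() failed, errno = 111 (Connection refused)" */
                    printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
            }
            close(fd);
            return 0;
    }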
00:29:24.960 [2024-07-15 21:05:28.654618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4f20 is same with the state(5) to be set 00:29:24.960 [2024-07-15 21:05:28.655351] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.655441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0454000b90 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Write completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Write completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Write completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Write completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Write completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Write completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Write completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Write completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Write completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Write completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Write completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Write completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Write completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Write completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 Read completed with error (sct=0, sc=8) 00:29:24.960 starting I/O failed 00:29:24.960 [2024-07-15 21:05:28.655670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.960 [2024-07-15 21:05:28.656085] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, 
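The burst above is the host draining its outstanding I/Os once the qpair is declared failed: each queued read/write completes with an NVMe error status (sct is the status code type, sc the status code; sct=0, sc=8 corresponds to the generic "command aborted due to SQ deletion" status), after which spdk_nvme_qpair_process_completions() reports the transport error -6 (-ENXIO, "No such device or address"). A minimal sketch, assuming the public SPDK NVMe API headers, of how a completion callback typically distinguishes these aborted completions; io_complete is a hypothetical name, not taken from this test:

    #include <stdio.h>
    #include <spdk/nvme.h>

    /* Hypothetical I/O completion callback matching the spdk_nvme_cmd_cb signature. */
    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl)) {
                    /* During a target disconnect, queued I/Os complete with an
                     * error status such as the sct=0, sc=8 seen in the log. */
                    fprintf(stderr, "I/O failed: sct=%u sc=%u\n",
                            cpl->status.sct, cpl->status.sc);
                    return;
            }
            /* Success path: the command completed normally. */
    }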
errno = 111 00:29:24.960 [2024-07-15 21:05:28.656095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.656398] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.656406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.656727] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.656734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.657158] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.657166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.657558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.657565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.657960] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.657966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.658281] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.658288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.658710] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.658719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.659102] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.659109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.659554] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.659561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 
00:29:24.960 [2024-07-15 21:05:28.659945] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.659951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.660333] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.960 [2024-07-15 21:05:28.660340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.960 qpair failed and we were unable to recover it. 00:29:24.960 [2024-07-15 21:05:28.660723] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.660729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.661110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.661116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.661508] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.661515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.661900] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.661907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.662422] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.662449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.662851] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.662860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.663366] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.663393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.663870] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.663878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 
00:29:24.961 [2024-07-15 21:05:28.664381] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.664409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.664813] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.664821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.665222] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.665229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.665526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.665532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.665908] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.665915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.666291] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.666298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.666693] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.666700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.667082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.667088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.667475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.667483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.667895] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.667902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 
00:29:24.961 [2024-07-15 21:05:28.668422] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.668449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.668846] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.668854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.669239] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.669247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.669634] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.669641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.669847] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.669859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.670242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.670249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.670639] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.670646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.671036] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.671043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.671487] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.671494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.671916] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.671923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 
00:29:24.961 [2024-07-15 21:05:28.672306] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.672313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.672694] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.672701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.673084] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.673090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.673482] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.673488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.673868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.673875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.674425] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.674452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.674880] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.674888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.675380] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.675408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.675713] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.675721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.676103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.676110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 
00:29:24.961 [2024-07-15 21:05:28.676402] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.676409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.676808] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.676815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.677238] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.677245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.677630] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.677637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.678040] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.678046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.678443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.678450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.678840] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.678847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.679227] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.679233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.679535] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.679542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.679969] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.679976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 
00:29:24.961 [2024-07-15 21:05:28.680379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.680386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.680796] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.680802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.681183] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.681190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.961 qpair failed and we were unable to recover it. 00:29:24.961 [2024-07-15 21:05:28.681570] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.961 [2024-07-15 21:05:28.681577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.681962] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.681969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.682349] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.682355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.682661] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.682668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.683132] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.683139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.683525] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.683532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.683959] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.683966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 
00:29:24.962 [2024-07-15 21:05:28.684271] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.684277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.684663] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.684670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.685050] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.685056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.685531] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.685537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.685914] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.685922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.686301] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.686308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.686699] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.686705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.687010] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.687017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.687438] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.687446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.687873] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.687880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 
00:29:24.962 [2024-07-15 21:05:28.688283] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.688289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.688671] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.688678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.689062] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.689068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.689371] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.689378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.689767] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.689774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.690156] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.690163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.690535] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.690541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.690841] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.690848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.691281] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.691288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.691602] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.691610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 
00:29:24.962 [2024-07-15 21:05:28.691995] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.692002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.692398] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.692404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.692788] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.692794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.693174] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.693180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.693610] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.693616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.694004] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.694011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.694322] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.694328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.694754] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.694760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.695090] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.695096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 00:29:24.962 [2024-07-15 21:05:28.695506] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.962 [2024-07-15 21:05:28.695514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.962 qpair failed and we were unable to recover it. 
00:29:24.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1771880 Killed "${NVMF_APP[@]}" "$@"
00:29:24.962 21:05:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:24.962 21:05:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:24.962 21:05:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:24.962 21:05:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:24.962 21:05:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:24.963 21:05:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1772910
00:29:24.963 21:05:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1772910
00:29:24.963 21:05:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1772910 ']'
00:29:24.963 21:05:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:24.963 21:05:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:24.963 21:05:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:24.963 21:05:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:24.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:24.963 21:05:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:24.963 21:05:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
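The trace above launches a fresh nvmf_tgt (pid 1772910) inside the cvl_0_0_ns_spdk network namespace and then waits, via the waitforlisten helper from common/autotest_common.sh with max_retries=100, for the new process to start listening on the RPC UNIX socket /var/tmp/spdk.sock. The real helper is a shell function; the C sketch below only illustrates the idea of polling a UNIX domain socket until the target accepts connections (the function name rpc_socket_ready is invented for this example).

    /* Illustration only -- the actual waitforlisten is a shell helper traced
     * above from common/autotest_common.sh.  This sketch polls until something
     * is accepting connections on the RPC UNIX socket /var/tmp/spdk.sock. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int rpc_socket_ready(const char *path)   /* hypothetical helper name */
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        int fd, rc;

        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return 0;
        rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
        close(fd);
        return rc == 0;
    }

    int main(void)
    {
        const int max_retries = 100;    /* mirrors max_retries=100 in the trace */

        for (int i = 0; i < max_retries; i++) {
            if (rpc_socket_ready("/var/tmp/spdk.sock")) {
                printf("target is listening on /var/tmp/spdk.sock\n");
                return 0;
            }
            usleep(100 * 1000);         /* wait 100 ms between polls */
        }
        fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
        return 1;
    }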
(the connect() failed, errno = 111 / qpair failed messages continue to repeat while the new nvmf_tgt process starts up)
00:29:24.965 [2024-07-15 21:05:28.758574] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization...
00:29:24.965 [2024-07-15 21:05:28.758617] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
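The restarted target comes up with the EAL core mask -c 0xF0, matching nvmfappstart -m 0xF0 earlier in the trace: 0xF0 has bits 4-7 set, so the target's reactors run on CPU cores 4-7. A small sketch of how such a hexadecimal core mask decodes (illustrative only; the actual parsing is done by the DPDK EAL / SPDK app framework):

    /* Decodes a DPDK/SPDK-style hexadecimal core mask such as 0xF0.
     * Illustration only; real parsing lives in the DPDK EAL / SPDK app layer. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long mask = 0xF0;   /* the -c/-m value from the log */

        printf("core mask 0x%llX selects cores:", mask);
        for (int core = 0; core < 64; core++) {
            if (mask & (1ULL << core))
                printf(" %d", core);
        }
        printf("\n");   /* for 0xF0 this prints: core mask 0xF0 selects cores: 4 5 6 7 */
        return 0;
    }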
00:29:24.967 EAL: No free 2048 kB hugepages reported on node 1
00:29:24.967 [ ... connect() failed (errno = 111) / qpair failed sequence continues ... ]
00:29:24.969 [2024-07-15 21:05:28.832728] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.969 [2024-07-15 21:05:28.832734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.969 qpair failed and we were unable to recover it. 00:29:24.969 [2024-07-15 21:05:28.832942] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.969 [2024-07-15 21:05:28.832949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.969 qpair failed and we were unable to recover it. 00:29:24.969 [2024-07-15 21:05:28.833389] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.969 [2024-07-15 21:05:28.833396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.969 qpair failed and we were unable to recover it. 00:29:24.969 [2024-07-15 21:05:28.833785] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.969 [2024-07-15 21:05:28.833791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.969 qpair failed and we were unable to recover it. 00:29:24.969 [2024-07-15 21:05:28.834183] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.969 [2024-07-15 21:05:28.834190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.969 qpair failed and we were unable to recover it. 00:29:24.969 [2024-07-15 21:05:28.834580] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.969 [2024-07-15 21:05:28.834587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.969 qpair failed and we were unable to recover it. 00:29:24.969 [2024-07-15 21:05:28.834964] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.969 [2024-07-15 21:05:28.834970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.969 qpair failed and we were unable to recover it. 00:29:24.969 [2024-07-15 21:05:28.835358] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.969 [2024-07-15 21:05:28.835365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:24.969 qpair failed and we were unable to recover it. 00:29:24.969 [2024-07-15 21:05:28.835753] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.835760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 00:29:25.234 [2024-07-15 21:05:28.836172] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.836181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 
00:29:25.234 [2024-07-15 21:05:28.836397] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.836405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 00:29:25.234 [2024-07-15 21:05:28.836861] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.836868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 00:29:25.234 [2024-07-15 21:05:28.837263] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.837269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 00:29:25.234 [2024-07-15 21:05:28.837656] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.837662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 00:29:25.234 [2024-07-15 21:05:28.838052] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.838059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 00:29:25.234 [2024-07-15 21:05:28.838285] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.838292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 00:29:25.234 [2024-07-15 21:05:28.838658] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.838664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 00:29:25.234 [2024-07-15 21:05:28.839093] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.839100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 00:29:25.234 [2024-07-15 21:05:28.839484] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.839491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 00:29:25.234 [2024-07-15 21:05:28.839980] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.839987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 
00:29:25.234 [2024-07-15 21:05:28.840360] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.840367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 00:29:25.234 [2024-07-15 21:05:28.840702] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.840708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 00:29:25.234 [2024-07-15 21:05:28.840956] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.840964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 00:29:25.234 [2024-07-15 21:05:28.841372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.841379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 00:29:25.234 [2024-07-15 21:05:28.841768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.234 [2024-07-15 21:05:28.841775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.234 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.842085] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.842093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.842186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.235 [2024-07-15 21:05:28.842303] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.842310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.842745] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.842751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.843143] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.843151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 
00:29:25.235 [2024-07-15 21:05:28.843567] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.843573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.843965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.843971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.844363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.844370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.844763] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.844770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.845182] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.845189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.845475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.845482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.845913] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.845920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.846306] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.846313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.846700] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.846706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.847093] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.847102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 
00:29:25.235 [2024-07-15 21:05:28.847504] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.847511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.847917] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.847924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.848455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.848482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.848895] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.848903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.849423] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.849451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.849908] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.849916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.850407] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.850435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.850861] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.850869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.851347] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.851374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.851783] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.851791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 
00:29:25.235 [2024-07-15 21:05:28.852241] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.852249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.852683] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.852690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.852906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.852915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.853350] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.853358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.853801] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.853808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.854234] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.854242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.854551] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.854558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.854958] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.854965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.855181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.855188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.855558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.855565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 
00:29:25.235 [2024-07-15 21:05:28.855998] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.856005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.856443] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.856450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.856843] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.856850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.857362] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.857389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.857790] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.857798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.858223] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.858231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.858555] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.858563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.858859] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.858865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.859257] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.859264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.859655] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.859661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 
00:29:25.235 [2024-07-15 21:05:28.860054] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.860062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.860470] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.860476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.860858] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.860866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.861171] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.861178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.861704] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.861712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.862070] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.862077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.862400] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.862407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.862812] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.862818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.863242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.863249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.863685] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.863694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 
00:29:25.235 [2024-07-15 21:05:28.864073] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.864080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.864471] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.864478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.235 [2024-07-15 21:05:28.864730] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.235 [2024-07-15 21:05:28.864736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.235 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.865113] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.865119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.865544] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.865551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.865973] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.865980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.866495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.866523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.866967] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.866975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.867477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.867505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.867909] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.867917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 
00:29:25.236 [2024-07-15 21:05:28.868307] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.868334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.868736] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.868745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.869329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.869357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.869759] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.869767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.870170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.870178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.870600] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.870607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.870917] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.870924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.871151] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.871158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.871536] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.871543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.871931] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.871937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 
00:29:25.236 [2024-07-15 21:05:28.872356] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.872363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.872666] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.872673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.873065] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.873071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.873475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.873482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.873950] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.873956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.874267] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.874274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.874671] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.874678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.875047] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.875054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.875436] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.875443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.875834] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.875841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 
00:29:25.236 [2024-07-15 21:05:28.876042] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.876053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.876470] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.876477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.876808] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.876814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.877201] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.877209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.877609] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.877616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.877999] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.878007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.878292] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.878300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.878709] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.878716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.879118] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.879128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.879526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.879535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 
00:29:25.236 [2024-07-15 21:05:28.879964] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.879971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.880466] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.880494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.880905] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.880913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.881512] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.881539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.881857] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.881866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.882088] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.882095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.882493] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.882500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.882935] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.882942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.883408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.883435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.883835] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.883844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 
00:29:25.236 [2024-07-15 21:05:28.884371] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.884399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.884802] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.884811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.885201] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.885208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.885478] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.885485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.885888] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.885894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.886108] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.886115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.886535] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.886542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.886840] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.886848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.887271] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.887278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 00:29:25.236 [2024-07-15 21:05:28.887697] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.236 [2024-07-15 21:05:28.887704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.236 qpair failed and we were unable to recover it. 
00:29:25.236 [2024-07-15 21:05:28.888090] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.237 [2024-07-15 21:05:28.888097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420
00:29:25.237 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for 49 further connection attempts between 21:05:28.888526 and 21:05:28.907609 ...]
00:29:25.237 [2024-07-15 21:05:28.907991] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.237 [2024-07-15 21:05:28.907998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420
00:29:25.237 qpair failed and we were unable to recover it.
00:29:25.237 [2024-07-15 21:05:28.908371] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:25.237 [2024-07-15 21:05:28.908397] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:25.238 [2024-07-15 21:05:28.908406] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:25.238 [2024-07-15 21:05:28.908412] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:25.238 [2024-07-15 21:05:28.908418] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:25.238 [2024-07-15 21:05:28.908477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.238 [2024-07-15 21:05:28.908484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420
00:29:25.238 qpair failed and we were unable to recover it.
00:29:25.238 [2024-07-15 21:05:28.908580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:25.238 [2024-07-15 21:05:28.908853] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.238 [2024-07-15 21:05:28.908860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420
00:29:25.238 qpair failed and we were unable to recover it.
00:29:25.238 [2024-07-15 21:05:28.908895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:25.238 [2024-07-15 21:05:28.909025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:25.238 [2024-07-15 21:05:28.909026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:25.238 [2024-07-15 21:05:28.909361] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.238 [2024-07-15 21:05:28.909387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420
00:29:25.238 qpair failed and we were unable to recover it.
00:29:25.238 [2024-07-15 21:05:28.909710] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.238 [2024-07-15 21:05:28.909718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420
00:29:25.238 qpair failed and we were unable to recover it.
00:29:25.238 [2024-07-15 21:05:28.910139] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.238 [2024-07-15 21:05:28.910147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420
00:29:25.238 qpair failed and we were unable to recover it.
00:29:25.238 [2024-07-15 21:05:28.910562] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.238 [2024-07-15 21:05:28.910569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420
00:29:25.238 qpair failed and we were unable to recover it.
00:29:25.238 [2024-07-15 21:05:28.910965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.238 [2024-07-15 21:05:28.910972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420
00:29:25.238 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for 149 further connection attempts between 21:05:28.911278 and 21:05:28.965998 ...]
00:29:25.241 [2024-07-15 21:05:28.966429] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.966436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.966741] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.966749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.967172] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.967179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.967592] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.967598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.968012] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.968018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.968423] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.968430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.968828] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.968835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.969276] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.969283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.969801] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.969808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.970210] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.970218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 
00:29:25.241 [2024-07-15 21:05:28.970675] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.970682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.970882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.970888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.971116] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.971128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.971531] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.971538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.971764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.971770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.972072] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.972079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.972269] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.972276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.972580] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.972587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.972984] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.972990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.973449] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.973455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 
00:29:25.241 [2024-07-15 21:05:28.973657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.973663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.973956] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.973962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.974471] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.974478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.974890] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.974896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.975402] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.975431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.975883] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.975892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.976121] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.976135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.976538] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.976545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.977022] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.977029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.977594] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.977622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 
00:29:25.241 [2024-07-15 21:05:28.978091] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.978099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.978629] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.978657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.979066] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.979075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.979408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.979434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.979882] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.979891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.980368] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.980395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.980863] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.980872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.981081] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.981089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.981291] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.981298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.981718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.981724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 
00:29:25.241 [2024-07-15 21:05:28.982107] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.982114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.982428] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.982435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.982879] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.982885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.983362] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.983390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.983802] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.983810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.984206] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.984213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.984452] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.984459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.984888] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.984894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.985120] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.985132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.985596] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.985606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 
00:29:25.241 [2024-07-15 21:05:28.985993] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.986000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.986405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.986432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.986843] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.986851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.987372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.241 [2024-07-15 21:05:28.987399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.241 qpair failed and we were unable to recover it. 00:29:25.241 [2024-07-15 21:05:28.987834] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.987842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.988321] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.988348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.988584] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.988592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.989005] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.989012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.989316] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.989323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.989709] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.989716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 
00:29:25.242 [2024-07-15 21:05:28.990107] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.990114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.990507] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.990514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.990811] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.990818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.991225] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.991233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.991673] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.991680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.992078] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.992085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.992500] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.992508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.992918] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.992925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.993467] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.993495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.993945] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.993953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 
00:29:25.242 [2024-07-15 21:05:28.994431] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.994459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.994869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.994877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.995407] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.995434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.995839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.995847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.996328] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.996356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.996791] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.996800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.997107] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.997114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.997583] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.997590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.997974] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.997982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.998484] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.998511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 
00:29:25.242 [2024-07-15 21:05:28.998731] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.998739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.999168] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.999176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:28.999577] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:28.999584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.000021] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.000028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.000446] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.000454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.000840] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.000848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.001164] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.001172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.001360] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.001369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.001774] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.001781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.002090] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.002101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 
00:29:25.242 [2024-07-15 21:05:29.002533] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.002541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.002975] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.002981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.003368] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.003375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.003764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.003770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.004081] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.004088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.004496] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.004504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.004936] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.004943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.005435] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.005463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.005670] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.005679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.005910] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.005917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 
00:29:25.242 [2024-07-15 21:05:29.006182] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.006189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.006589] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.006595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.006995] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.007001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.007218] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.007238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.007527] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.007534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.007938] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.007946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.008350] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.008357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.008744] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.008751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.009014] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.009020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.009415] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.009422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 
00:29:25.242 [2024-07-15 21:05:29.009806] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.009813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.010199] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.010206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.010416] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.010425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.010854] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.242 [2024-07-15 21:05:29.010860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.242 qpair failed and we were unable to recover it. 00:29:25.242 [2024-07-15 21:05:29.011291] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.011298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 00:29:25.243 [2024-07-15 21:05:29.011612] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.011619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 00:29:25.243 [2024-07-15 21:05:29.011828] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.011835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 00:29:25.243 [2024-07-15 21:05:29.012269] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.012276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 00:29:25.243 [2024-07-15 21:05:29.012662] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.012668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 00:29:25.243 [2024-07-15 21:05:29.012929] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.012936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 
00:29:25.243 [2024-07-15 21:05:29.013199] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.013206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 00:29:25.243 [2024-07-15 21:05:29.013599] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.013606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 00:29:25.243 [2024-07-15 21:05:29.013992] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.013999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 00:29:25.243 [2024-07-15 21:05:29.014432] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.014439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 00:29:25.243 [2024-07-15 21:05:29.014832] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.014838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 00:29:25.243 [2024-07-15 21:05:29.015036] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.015049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 00:29:25.243 [2024-07-15 21:05:29.015465] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.015472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 00:29:25.243 [2024-07-15 21:05:29.015578] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.015584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 00:29:25.243 [2024-07-15 21:05:29.015847] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.015854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 00:29:25.243 [2024-07-15 21:05:29.016111] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.243 [2024-07-15 21:05:29.016121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.243 qpair failed and we were unable to recover it. 
00:29:25.243 [2024-07-15 21:05:29.016527] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.243 [2024-07-15 21:05:29.016533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420
00:29:25.243 qpair failed and we were unable to recover it.
00:29:25.243 [... the same three-line error sequence -- posix_sock_create connect() failure with errno = 111, the nvme_tcp_qpair_connect_sock connection error for tqpair=0x7f045c000b90 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." -- repeats continuously for every reconnect attempt, with timestamps running from 21:05:29.016 through 21:05:29.093 ...]
00:29:25.246 [2024-07-15 21:05:29.093868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.246 [2024-07-15 21:05:29.093876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.246 qpair failed and we were unable to recover it. 00:29:25.246 [2024-07-15 21:05:29.094393] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.246 [2024-07-15 21:05:29.094421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.246 qpair failed and we were unable to recover it. 00:29:25.246 [2024-07-15 21:05:29.094828] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.246 [2024-07-15 21:05:29.094837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.246 qpair failed and we were unable to recover it. 00:29:25.246 [2024-07-15 21:05:29.095226] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.246 [2024-07-15 21:05:29.095234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.246 qpair failed and we were unable to recover it. 00:29:25.246 [2024-07-15 21:05:29.095627] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.246 [2024-07-15 21:05:29.095634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.246 qpair failed and we were unable to recover it. 00:29:25.246 [2024-07-15 21:05:29.095898] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.246 [2024-07-15 21:05:29.095905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.246 qpair failed and we were unable to recover it. 00:29:25.246 [2024-07-15 21:05:29.096296] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.096303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.096609] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.096616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.097041] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.097047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.097448] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.097455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 
00:29:25.247 [2024-07-15 21:05:29.097869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.097876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.098192] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.098199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.098644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.098650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.099081] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.099088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.099493] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.099501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.099917] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.099924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.100455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.100483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.100891] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.100900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.101401] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.101429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.101904] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.101912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 
00:29:25.247 [2024-07-15 21:05:29.102109] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.102118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.102544] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.102551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.102771] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.102777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.103072] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.103079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.103387] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.103395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.103761] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.103769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.104183] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.104191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.104466] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.104473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.104878] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.104884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.105275] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.105285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 
00:29:25.247 [2024-07-15 21:05:29.105486] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.105494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.105892] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.105899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.106211] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.106218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.106548] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.106555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.106957] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.106964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.107349] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.107356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.107741] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.107748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.108142] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.108149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.108377] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.108383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.108722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.108729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 
00:29:25.247 [2024-07-15 21:05:29.109117] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.109128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.109513] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.109520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.109914] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.109921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.110251] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.110258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.110676] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.110684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.110912] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.110919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.111202] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.111209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.111614] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.111621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.111823] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.111830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.112021] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.112027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 
00:29:25.247 [2024-07-15 21:05:29.112484] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.112491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.112868] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.112874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.113262] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.113268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.113474] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.113480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.113846] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.113853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.114240] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.114246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.114640] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.114646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.115055] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.115061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.115328] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.115335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.115756] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.115762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 
00:29:25.247 [2024-07-15 21:05:29.116152] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.116159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.116259] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.116266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.116646] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.116652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.117039] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.117046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.117436] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.117444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.117765] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.117772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.247 qpair failed and we were unable to recover it. 00:29:25.247 [2024-07-15 21:05:29.118181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.247 [2024-07-15 21:05:29.118188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.248 qpair failed and we were unable to recover it. 00:29:25.248 [2024-07-15 21:05:29.118553] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.248 [2024-07-15 21:05:29.118560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.248 qpair failed and we were unable to recover it. 00:29:25.248 [2024-07-15 21:05:29.118943] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.248 [2024-07-15 21:05:29.118950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.248 qpair failed and we were unable to recover it. 00:29:25.248 [2024-07-15 21:05:29.119271] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.248 [2024-07-15 21:05:29.119281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.248 qpair failed and we were unable to recover it. 
00:29:25.248 [2024-07-15 21:05:29.119670] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.248 [2024-07-15 21:05:29.119677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.248 qpair failed and we were unable to recover it. 00:29:25.248 [2024-07-15 21:05:29.119946] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.248 [2024-07-15 21:05:29.119953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.248 qpair failed and we were unable to recover it. 00:29:25.248 [2024-07-15 21:05:29.120377] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.248 [2024-07-15 21:05:29.120384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.248 qpair failed and we were unable to recover it. 00:29:25.248 [2024-07-15 21:05:29.120782] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.248 [2024-07-15 21:05:29.120789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.248 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.121183] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.121192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.121616] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.121622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.122034] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.122041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.122450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.122457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.122853] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.122860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.123259] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.123267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 
00:29:25.528 [2024-07-15 21:05:29.123666] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.123674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.124089] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.124096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.124527] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.124535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.124830] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.124837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.125226] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.125232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.125634] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.125640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.126126] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.126133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.126409] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.126415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.126817] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.126824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.127132] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.127139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 
00:29:25.528 [2024-07-15 21:05:29.127543] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.127549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.128017] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.128023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.128215] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.128225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.128629] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.128636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.129022] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.129029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.129345] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.129352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.129757] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.129764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.130161] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.130168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.130590] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.130596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.130985] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.130992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 
00:29:25.528 [2024-07-15 21:05:29.131208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.131215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.131574] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.131581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.131966] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.131973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.132361] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.132368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.132756] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.528 [2024-07-15 21:05:29.132763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.528 qpair failed and we were unable to recover it. 00:29:25.528 [2024-07-15 21:05:29.133059] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.133066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.133476] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.133483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.133867] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.133874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.134261] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.134267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.134657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.134666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 
00:29:25.529 [2024-07-15 21:05:29.135049] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.135057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.135374] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.135381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.135785] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.135791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.135986] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.135994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.136200] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.136207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.136470] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.136477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.136865] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.136872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.137087] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.137094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.137500] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.137507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.137805] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.137812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 
00:29:25.529 [2024-07-15 21:05:29.138098] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.138105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.138540] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.138547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.138957] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.138964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.139326] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.139353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.139839] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.139847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.140353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.140380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.140781] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.140790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.141179] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.141187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.141581] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.141587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.141978] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.141985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 
00:29:25.529 [2024-07-15 21:05:29.142382] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.142390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.142795] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.142802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.143208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.143215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.143602] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.143608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.143992] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.143999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.144230] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.144237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.144633] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.144643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.145036] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.145043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.145457] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.145464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.145895] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.145901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 
00:29:25.529 [2024-07-15 21:05:29.146170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.146178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.146384] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.146390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.146584] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.146595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.146937] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.529 [2024-07-15 21:05:29.146944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.529 qpair failed and we were unable to recover it. 00:29:25.529 [2024-07-15 21:05:29.147165] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.147172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.147540] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.147546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.147928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.147934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.148322] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.148329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.148549] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.148555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.148908] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.148914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 
00:29:25.530 [2024-07-15 21:05:29.149183] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.149190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.149685] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.149692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.150092] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.150099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.150477] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.150484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.150873] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.150879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.151189] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.151196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.151407] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.151415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.151816] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.151822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.152246] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.152253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.152676] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.152682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 
00:29:25.530 [2024-07-15 21:05:29.153111] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.153117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.153350] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.153356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.153738] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.153745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.154128] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.154135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.154521] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.154528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.154935] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.154942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.155250] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.155257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.155644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.155650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.156038] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.156045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.156513] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.156519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 
00:29:25.530 [2024-07-15 21:05:29.156705] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.156712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.157085] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.157091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.157478] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.157485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.157869] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.157876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.158261] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.158267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.158590] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.158596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.159025] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.159033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.159381] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.159388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.159773] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.159780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.160162] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.160170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 
00:29:25.530 [2024-07-15 21:05:29.160489] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.160495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.160878] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.160885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.161146] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.161153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.161338] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.161346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.530 qpair failed and we were unable to recover it. 00:29:25.530 [2024-07-15 21:05:29.161757] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.530 [2024-07-15 21:05:29.161764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.162151] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.162158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.162478] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.162485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.162892] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.162898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.163286] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.163293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.163506] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.163513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 
00:29:25.531 [2024-07-15 21:05:29.163924] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.163931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.164130] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.164138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.164551] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.164557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.164821] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.164828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.165212] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.165218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.165647] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.165654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.166047] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.166053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.166451] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.166458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.166845] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.166851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.167237] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.167244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 
00:29:25.531 [2024-07-15 21:05:29.167627] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.167633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.168013] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.168020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.168411] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.168418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.168696] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.168703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.169155] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.169162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.169430] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.169437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.169653] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.169660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.170061] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.170067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.170266] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.170273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.170746] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.170752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 
00:29:25.531 [2024-07-15 21:05:29.171029] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.171036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.171297] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.171304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.171701] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.171708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.172115] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.172124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.172332] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.172340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.172719] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.172725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.173034] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.173043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.173405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.173412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.173798] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.173804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.174192] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.174199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 
00:29:25.531 [2024-07-15 21:05:29.174673] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.174680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.174892] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.174898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.175175] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.175183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.175399] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.175407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.175636] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.531 [2024-07-15 21:05:29.175643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.531 qpair failed and we were unable to recover it. 00:29:25.531 [2024-07-15 21:05:29.176067] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.176074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.176473] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.176480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.176705] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.176711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.177117] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.177134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.177511] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.177518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 
00:29:25.532 [2024-07-15 21:05:29.177904] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.177911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.177981] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.177986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.178353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.178360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.178630] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.178637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.179026] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.179033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.179240] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.179247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.179626] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.179632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.180017] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.180024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.180432] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.180439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.180701] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.180708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 
00:29:25.532 [2024-07-15 21:05:29.181140] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.181147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.181444] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.181450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.181886] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.181892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.182083] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.182090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.182435] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.182442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.182836] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.182843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.183037] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.183044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.183394] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.183401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.183830] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.183836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.184221] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.184228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 
00:29:25.532 [2024-07-15 21:05:29.184421] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.184435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.184893] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.184900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.185108] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.185114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.185534] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.185541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.185848] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.185855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.186263] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.186270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.186691] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.186700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.187103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.187109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.187495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.532 [2024-07-15 21:05:29.187502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.532 qpair failed and we were unable to recover it. 00:29:25.532 [2024-07-15 21:05:29.187721] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.187727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 
00:29:25.533 [2024-07-15 21:05:29.188127] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.188134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.188546] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.188553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.188667] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.188674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.188963] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.188969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.189354] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.189362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.189556] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.189562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.189977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.189984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.190306] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.190312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.190720] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.190727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.191109] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.191116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 
00:29:25.533 [2024-07-15 21:05:29.191505] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.191511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.191981] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.191988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.192510] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.192537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.193002] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.193011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.193320] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.193328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.193533] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.193542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.193761] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.193768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.193975] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.193981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.194261] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.194269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.194720] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.194727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 
00:29:25.533 [2024-07-15 21:05:29.195033] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.195039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.195335] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.195342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.195563] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.195570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.196015] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.196022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.196430] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.196437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.196633] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.196641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.197060] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.197066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.197462] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.197469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.197856] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.197863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.198247] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.198254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 
00:29:25.533 [2024-07-15 21:05:29.198660] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.198666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.199053] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.199060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.199267] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.199275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.199718] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.199726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.200135] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.200142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.200526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.200533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.200927] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.200936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.201145] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.201152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.201344] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.201351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 00:29:25.533 [2024-07-15 21:05:29.201747] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.533 [2024-07-15 21:05:29.201753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.533 qpair failed and we were unable to recover it. 
00:29:25.533 [2024-07-15 21:05:29.202140] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.202147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.202530] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.202536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.202854] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.202860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.203130] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.203138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.203330] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.203337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.203642] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.203649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.204048] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.204055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.204458] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.204464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.204843] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.204849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.205233] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.205240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 
00:29:25.534 [2024-07-15 21:05:29.205685] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.205692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.206127] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.206134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.206535] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.206542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.206949] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.206956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.207273] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.207280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.207710] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.207716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.208107] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.208113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.208375] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.208382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.208768] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.208775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.209180] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.209187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 
00:29:25.534 [2024-07-15 21:05:29.209570] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.209577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.209647] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.209652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.210030] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.210036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.210432] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.210439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.210824] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.210830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.211260] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.211267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.211698] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.211705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.212025] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.212031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.212239] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.212246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.212652] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.212658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 
00:29:25.534 [2024-07-15 21:05:29.213041] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.213047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.213248] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.213255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.213679] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.213685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.214069] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.214076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.214482] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.214488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.214685] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.214692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.215037] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.215046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.215465] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.215472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.215785] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.215792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.216178] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.216185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 
00:29:25.534 [2024-07-15 21:05:29.216619] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.534 [2024-07-15 21:05:29.216626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.534 qpair failed and we were unable to recover it. 00:29:25.534 [2024-07-15 21:05:29.217010] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.217017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.217423] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.217430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.217540] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.217547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.217960] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.217966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.218353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.218359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.218619] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.218625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.218846] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.218852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.219213] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.219219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.219558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.219564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 
00:29:25.535 [2024-07-15 21:05:29.219995] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.220002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.220405] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.220412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.220625] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.220632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.220919] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.220926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.221269] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.221276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.221700] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.221707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.222150] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.222156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.222475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.222481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.222898] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.222904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.223287] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.223294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 
00:29:25.535 [2024-07-15 21:05:29.223708] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.223714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.224096] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.224103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.224506] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.224512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.224782] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.224788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.224971] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.224979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.225382] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.225389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.225779] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.225786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.226051] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.226058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.226329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.226336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.226727] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.226733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 
00:29:25.535 [2024-07-15 21:05:29.227139] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.227145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.227518] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.227525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.227913] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.227920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.228320] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.228327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.228728] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.228735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.228937] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.228944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.229149] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.229158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.229611] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.229617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.230026] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.230032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.230293] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.230300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 
00:29:25.535 [2024-07-15 21:05:29.230705] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.230711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.231095] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.535 [2024-07-15 21:05:29.231102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.535 qpair failed and we were unable to recover it. 00:29:25.535 [2024-07-15 21:05:29.231491] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.231498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.231878] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.231884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.232150] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.232157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.232547] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.232553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.232940] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.232946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.233165] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.233172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.233630] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.233636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.234023] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.234030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 
00:29:25.536 [2024-07-15 21:05:29.234474] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.234481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.234867] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.234873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.235138] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.235144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.235470] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.235477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.235748] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.235755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.236047] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.236054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.236347] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.236353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.236771] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.236777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.237002] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.237008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.237375] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.237382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 
00:29:25.536 [2024-07-15 21:05:29.237689] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.237696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.237925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.237931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.238415] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.238422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.238812] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.238819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.239024] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.239032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.239263] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.239271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.239722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.239728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.240115] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.240127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.240513] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.240519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.240905] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.240911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 
00:29:25.536 [2024-07-15 21:05:29.241130] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.241137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.241508] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.241514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.241900] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.241907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.242293] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.242299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.242610] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.242617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.243033] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.243040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.536 [2024-07-15 21:05:29.243228] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.536 [2024-07-15 21:05:29.243243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.536 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.243520] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.243526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.243956] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.243962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.244347] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.244354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 
00:29:25.537 [2024-07-15 21:05:29.244740] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.244748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.245166] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.245173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.245235] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.245241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.245695] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.245702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.246112] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.246118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.246322] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.246328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.246714] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.246720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.247147] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.247154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.247559] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.247566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.247948] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.247955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 
00:29:25.537 [2024-07-15 21:05:29.248341] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.248348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.248654] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.248660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.248862] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.248869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.249125] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.249133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.249546] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.249553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.249937] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.249944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.250260] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.250267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.250664] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.250670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.251064] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.251071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.251484] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.251491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 
00:29:25.537 [2024-07-15 21:05:29.251695] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.251702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.252112] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.252118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.252321] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.252327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.252756] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.252764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.252974] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.252981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.253256] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.253264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.253666] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.253672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.254060] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.254067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.254509] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.254516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.254603] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.254609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 
00:29:25.537 [2024-07-15 21:05:29.254978] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.254985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.255298] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.255305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.255778] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.255784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.256171] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.256177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.256591] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.256598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.256980] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.256987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.257374] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.537 [2024-07-15 21:05:29.257383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.537 qpair failed and we were unable to recover it. 00:29:25.537 [2024-07-15 21:05:29.257722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.538 [2024-07-15 21:05:29.257728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.538 qpair failed and we were unable to recover it. 00:29:25.538 [2024-07-15 21:05:29.257955] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.538 [2024-07-15 21:05:29.257961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.538 qpair failed and we were unable to recover it. 00:29:25.538 [2024-07-15 21:05:29.258383] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.538 [2024-07-15 21:05:29.258390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.538 qpair failed and we were unable to recover it. 
00:29:25.538 [2024-07-15 21:05:29.258794] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.538 [2024-07-15 21:05:29.258801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.538 qpair failed and we were unable to recover it. 00:29:25.538 [2024-07-15 21:05:29.259108] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.538 [2024-07-15 21:05:29.259115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.538 qpair failed and we were unable to recover it. 00:29:25.538 [2024-07-15 21:05:29.259518] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.538 [2024-07-15 21:05:29.259525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.538 qpair failed and we were unable to recover it. 00:29:25.538 [2024-07-15 21:05:29.259923] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.538 [2024-07-15 21:05:29.259930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.538 qpair failed and we were unable to recover it. 00:29:25.538 [2024-07-15 21:05:29.260295] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.538 [2024-07-15 21:05:29.260323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.538 qpair failed and we were unable to recover it. 00:29:25.538 [2024-07-15 21:05:29.260766] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.538 [2024-07-15 21:05:29.260774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.538 qpair failed and we were unable to recover it. 00:29:25.538 [2024-07-15 21:05:29.261040] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.538 [2024-07-15 21:05:29.261048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.538 qpair failed and we were unable to recover it. 00:29:25.538 [2024-07-15 21:05:29.261364] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.538 [2024-07-15 21:05:29.261371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.538 qpair failed and we were unable to recover it. 00:29:25.538 [2024-07-15 21:05:29.261763] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.538 [2024-07-15 21:05:29.261769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.538 qpair failed and we were unable to recover it. 00:29:25.538 [2024-07-15 21:05:29.262153] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.538 [2024-07-15 21:05:29.262160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.538 qpair failed and we were unable to recover it. 
00:29:25.538 [2024-07-15 21:05:29.262556] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.538 [2024-07-15 21:05:29.262564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420
00:29:25.538 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it sequence repeats, with only the timestamps advancing, for each reconnect attempt between 21:05:29.262 and 21:05:29.338 ...]
00:29:25.543 [2024-07-15 21:05:29.338846] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.543 [2024-07-15 21:05:29.338852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420
00:29:25.543 qpair failed and we were unable to recover it.
00:29:25.543 [2024-07-15 21:05:29.339238] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.543 [2024-07-15 21:05:29.339246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.543 qpair failed and we were unable to recover it. 00:29:25.543 [2024-07-15 21:05:29.339650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.543 [2024-07-15 21:05:29.339657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.543 qpair failed and we were unable to recover it. 00:29:25.543 [2024-07-15 21:05:29.340043] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.543 [2024-07-15 21:05:29.340050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.543 qpair failed and we were unable to recover it. 00:29:25.543 [2024-07-15 21:05:29.340379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.543 [2024-07-15 21:05:29.340390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.543 qpair failed and we were unable to recover it. 00:29:25.543 [2024-07-15 21:05:29.340618] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.543 [2024-07-15 21:05:29.340625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.543 qpair failed and we were unable to recover it. 00:29:25.543 [2024-07-15 21:05:29.340781] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.543 [2024-07-15 21:05:29.340787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.543 qpair failed and we were unable to recover it. 00:29:25.543 [2024-07-15 21:05:29.341170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.543 [2024-07-15 21:05:29.341177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.543 qpair failed and we were unable to recover it. 00:29:25.543 [2024-07-15 21:05:29.341413] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.543 [2024-07-15 21:05:29.341420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.543 qpair failed and we were unable to recover it. 00:29:25.543 [2024-07-15 21:05:29.341804] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.543 [2024-07-15 21:05:29.341811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.543 qpair failed and we were unable to recover it. 00:29:25.543 [2024-07-15 21:05:29.342073] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.543 [2024-07-15 21:05:29.342080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.543 qpair failed and we were unable to recover it. 
00:29:25.543 [2024-07-15 21:05:29.342494] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.543 [2024-07-15 21:05:29.342501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.543 qpair failed and we were unable to recover it. 00:29:25.543 [2024-07-15 21:05:29.342889] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.543 [2024-07-15 21:05:29.342896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.543 qpair failed and we were unable to recover it. 00:29:25.543 [2024-07-15 21:05:29.343160] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.543 [2024-07-15 21:05:29.343168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.543 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.343366] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.343377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.343807] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.343814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.344077] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.344084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.344373] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.344381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.344703] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.344710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.345114] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.345121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.345382] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.345388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 
00:29:25.544 [2024-07-15 21:05:29.345661] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.345668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.346081] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.346088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.346563] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.346570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.346823] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.346829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.347050] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.347057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.347464] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.347471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.347855] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.347863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.348269] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.348276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.348692] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.348698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.349091] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.349098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 
00:29:25.544 [2024-07-15 21:05:29.349503] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.349510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.349896] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.349904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.350331] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.350358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.350765] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.350774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.351168] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.351176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.351590] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.351598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.351813] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.351819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.352276] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.352283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.352654] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.352661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.353078] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.353084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 
00:29:25.544 [2024-07-15 21:05:29.353482] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.353490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.353899] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.353906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.354327] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.354334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.354715] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.354725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.355107] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.355113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.355522] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.355530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.355925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.355933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.356353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.356380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.356782] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.356790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.357348] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.357376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 
00:29:25.544 [2024-07-15 21:05:29.357784] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.357792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.357994] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.358003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.358207] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.544 [2024-07-15 21:05:29.358215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.544 qpair failed and we were unable to recover it. 00:29:25.544 [2024-07-15 21:05:29.358597] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.358603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.358994] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.359001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.359455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.359461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.359878] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.359885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.360104] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.360112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.360504] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.360511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.360895] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.360902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 
00:29:25.545 [2024-07-15 21:05:29.361347] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.361375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.361811] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.361820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.362349] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.362377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.362648] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.362658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.363096] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.363105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.363526] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.363534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.363962] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.363969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.364182] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.364198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.364579] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.364586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.364708] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.364715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 
00:29:25.545 [2024-07-15 21:05:29.365149] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.365156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.365641] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.365647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.365954] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.365962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.366415] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.366422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.366810] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.366817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.367210] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.367217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.367388] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.367396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.367724] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.367731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.368135] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.368142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.368523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.368530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 
00:29:25.545 [2024-07-15 21:05:29.368720] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.368730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.369144] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.369152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.369566] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.369573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.369861] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.369869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.370277] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.370284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.370657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.370664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.370925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.370932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.371378] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.371385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.545 qpair failed and we were unable to recover it. 00:29:25.545 [2024-07-15 21:05:29.371779] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.545 [2024-07-15 21:05:29.371786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.372176] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.372183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 
00:29:25.546 [2024-07-15 21:05:29.372613] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.372620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.373005] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.373011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.373411] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.373420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.373681] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.373689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.374094] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.374100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.374498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.374505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.374894] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.374902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.375307] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.375314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.375765] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.375772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.376164] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.376171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 
00:29:25.546 [2024-07-15 21:05:29.376563] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.376570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.376954] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.376960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.377045] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.377051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.377235] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.377242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.377498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.377505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.377894] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.377901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.378287] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.378294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.378502] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.378509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.378903] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.378909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.379198] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.379205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 
00:29:25.546 [2024-07-15 21:05:29.379591] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.379598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.379817] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.379823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.380222] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.380229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.380646] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.380655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.381060] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.381068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.381514] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.381521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.381920] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.381927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.382139] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.382146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.382508] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.382515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.382903] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.382909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 
00:29:25.546 [2024-07-15 21:05:29.383313] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.383321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.383707] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.383714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.384098] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.384105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.384425] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.384434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.384824] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.384831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.385224] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.385232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.385638] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.385646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.386129] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.386137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.546 [2024-07-15 21:05:29.386317] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.546 [2024-07-15 21:05:29.386325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.546 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.386798] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.386805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 
00:29:25.547 [2024-07-15 21:05:29.387214] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.387222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.387653] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.387659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.388049] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.388056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.388455] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.388462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.388727] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.388733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.389119] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.389139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.389427] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.389434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.389908] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.389916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.390303] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.390310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.390700] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.390708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 
00:29:25.547 [2024-07-15 21:05:29.391169] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.391176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.391378] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.391385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.391767] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.391773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.392258] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.392264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.392659] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.392666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.392926] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.392933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.393318] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.393325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.393726] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.393733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.394128] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.394135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.394524] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.394531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 
00:29:25.547 [2024-07-15 21:05:29.394917] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.394924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.395406] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.395433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.395650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.395660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.396042] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.396050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.396270] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.396278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.396587] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.396594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.397026] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.397034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.397440] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.397449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.397852] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.397860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.398290] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.398297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 
00:29:25.547 [2024-07-15 21:05:29.398774] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.398781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.399041] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.399049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.399258] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.399267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.399692] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.399704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.399965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.399972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.400360] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.400367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.547 [2024-07-15 21:05:29.400751] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.547 [2024-07-15 21:05:29.400758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.547 qpair failed and we were unable to recover it. 00:29:25.830 [2024-07-15 21:05:29.401152] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.830 [2024-07-15 21:05:29.401161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.830 qpair failed and we were unable to recover it. 00:29:25.830 [2024-07-15 21:05:29.401478] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.830 [2024-07-15 21:05:29.401486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.830 qpair failed and we were unable to recover it. 00:29:25.830 [2024-07-15 21:05:29.401964] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.830 [2024-07-15 21:05:29.401972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.830 qpair failed and we were unable to recover it. 
00:29:25.830 [2024-07-15 21:05:29.402449] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.402456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.402858] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.402866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.403255] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.403263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.403733] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.403740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.403969] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.403976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.404456] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.404463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.404633] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.404642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.405021] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.405028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.405441] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.405447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.405834] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.405841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 
00:29:25.831 [2024-07-15 21:05:29.406269] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.406276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.406664] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.406671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.406873] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.406879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.407271] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.407278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.407664] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.407671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.408058] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.408065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.408542] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.408550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.408947] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.408954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.409342] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.409349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.409764] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.409771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 
00:29:25.831 [2024-07-15 21:05:29.409972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.409979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.410428] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.410434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.410629] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.410637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.411052] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.411058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.411463] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.411470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.411886] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.411893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.412296] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.412303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.412711] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.412718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.413138] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.413145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.413345] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.413358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 
00:29:25.831 [2024-07-15 21:05:29.413557] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.413564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.413932] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.413939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.414144] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.414150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.414448] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.831 [2024-07-15 21:05:29.414458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.831 qpair failed and we were unable to recover it. 00:29:25.831 [2024-07-15 21:05:29.414663] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.414670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.414881] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.414888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.415279] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.415287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.415518] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.415525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.415961] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.415967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.416221] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.416229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 
00:29:25.832 [2024-07-15 21:05:29.416644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.416652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.416914] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.416921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.417313] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.417320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.417605] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.417612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.417936] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.417942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.418257] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.418264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.418467] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.418475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.418866] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.418874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.419283] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.419291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.419704] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.419711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 
00:29:25.832 [2024-07-15 21:05:29.419958] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.419965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.420183] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.420190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.420575] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.420582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.420811] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.420818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.421020] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.421027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.421113] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.421119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.421528] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.421535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.421927] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.421935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.422376] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.422383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.422777] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.422784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 
00:29:25.832 [2024-07-15 21:05:29.423188] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.423202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.423584] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.832 [2024-07-15 21:05:29.423591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.832 qpair failed and we were unable to recover it. 00:29:25.832 [2024-07-15 21:05:29.423856] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.423863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.424251] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.424258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.424705] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.424712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.425091] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.425097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.425596] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.425603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.425676] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.425682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.426075] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.426081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.426482] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.426489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 
00:29:25.833 [2024-07-15 21:05:29.426865] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.426871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.427135] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.427141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.427535] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.427542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.427851] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.427859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.428262] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.428270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.428662] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.428669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.429103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.429111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.429287] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.429295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.429658] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.429665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.429948] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.429955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 
00:29:25.833 [2024-07-15 21:05:29.430337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.430344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.430744] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.430750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.430953] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.430961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.431362] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.431369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.431766] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.431772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.833 [2024-07-15 21:05:29.432166] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.833 [2024-07-15 21:05:29.432173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.833 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.432571] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.432578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.432963] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.432970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.433355] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.433362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.433749] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.433755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 
00:29:25.834 [2024-07-15 21:05:29.434147] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.434154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.434433] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.434441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.434728] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.434736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.435018] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.435026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.435112] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.435119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.435522] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.435528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.435972] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.435979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.436171] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.436181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.436488] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.436495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.436932] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.436939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 
00:29:25.834 [2024-07-15 21:05:29.437377] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.437385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.437769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.437775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.438162] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.438168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.438388] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.438394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.438745] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.438751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.439163] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.439170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.439609] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.439616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.440047] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.440053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.440456] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.440463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 00:29:25.834 [2024-07-15 21:05:29.440925] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.834 [2024-07-15 21:05:29.440932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.834 qpair failed and we were unable to recover it. 
00:29:25.835 [2024-07-15 21:05:29.441233] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.441240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.441552] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.441558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.441939] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.441946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.442333] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.442342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.442730] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.442737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.443129] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.443137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.443558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.443565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.443977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.443983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.444432] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.444460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.444863] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.444871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 
00:29:25.835 [2024-07-15 21:05:29.445079] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.445089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.445313] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.445321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.445644] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.445651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.445916] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.445922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.446332] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.446339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.446819] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.446827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.447262] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.447270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.447690] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.447698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.448118] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.448130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.448329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.448337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 
00:29:25.835 [2024-07-15 21:05:29.448769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.448776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.448989] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.448996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.449401] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.449409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.835 [2024-07-15 21:05:29.449707] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.835 [2024-07-15 21:05:29.449715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.835 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.450105] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.450111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.450498] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.450505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.450891] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.450898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.451341] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.451368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.451826] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.451834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.452337] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.452365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 
00:29:25.836 [2024-07-15 21:05:29.452590] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.452599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.452800] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.452809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.453320] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.453328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.453593] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.453600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.454000] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.454008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.454212] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.454220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.454537] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.454544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.454778] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.454784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.455184] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.455191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.455408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.455414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 
00:29:25.836 [2024-07-15 21:05:29.455819] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.455827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.456299] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.456307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.456689] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.456696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.456960] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.456969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.457361] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.457370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.457664] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.457672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.457880] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.457890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.836 qpair failed and we were unable to recover it. 00:29:25.836 [2024-07-15 21:05:29.458096] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.836 [2024-07-15 21:05:29.458104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.458308] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.458315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.458821] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.458829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 
00:29:25.837 [2024-07-15 21:05:29.459217] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.459224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.459495] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.459502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.459881] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.459889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.459974] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.459980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.460350] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.460358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.460741] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.460748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.461129] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.461137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.461344] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.461352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.461627] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.461634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.461981] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.461988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 
00:29:25.837 [2024-07-15 21:05:29.462380] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.462388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.462615] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.462622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.462824] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.462831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.463198] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.463206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.463521] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.837 [2024-07-15 21:05:29.463529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.837 qpair failed and we were unable to recover it. 00:29:25.837 [2024-07-15 21:05:29.463915] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.463922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.464362] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.464370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.464568] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.464577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.464780] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.464787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.465097] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.465104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 
00:29:25.838 [2024-07-15 21:05:29.465300] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.465309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.465792] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.465799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.466211] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.466219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.466537] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.466544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.466935] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.466942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.467329] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.467336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.467721] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.467728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.468161] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.468169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.468570] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.468577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.468966] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.468972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 
00:29:25.838 [2024-07-15 21:05:29.469236] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.469243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.469658] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.469665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.469952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.469958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.470361] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.470372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.470676] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.470684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.470938] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.470945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.471407] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.471415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.471607] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.471616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.472030] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.472037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 00:29:25.838 [2024-07-15 21:05:29.472439] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.838 [2024-07-15 21:05:29.472447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.838 qpair failed and we were unable to recover it. 
00:29:25.839 [2024-07-15 21:05:29.472872] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.472879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.473263] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.473271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.473481] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.473488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.473795] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.473802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.474204] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.474212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.474643] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.474651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.474722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.474728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.475117] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.475128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.475524] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.475532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.475808] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.475816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 
00:29:25.839 [2024-07-15 21:05:29.476186] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.476193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.476588] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.476595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.477022] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.477030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.477467] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.477474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.477875] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.477882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.478108] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.478114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.478486] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.478493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.478892] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.478899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.479214] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.479222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.479632] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.479640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 
00:29:25.839 [2024-07-15 21:05:29.480012] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.480020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.480311] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.480318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.480709] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.480715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.480984] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.480991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.481196] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.481205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.481627] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.839 [2024-07-15 21:05:29.481635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.839 qpair failed and we were unable to recover it. 00:29:25.839 [2024-07-15 21:05:29.481940] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.481947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.482388] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.482396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.482803] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.482811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.483221] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.483228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 
00:29:25.840 [2024-07-15 21:05:29.483530] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.483538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.483922] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.483929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.484248] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.484255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.484653] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.484662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.485052] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.485059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.485456] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.485464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.485890] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.485898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.485963] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.485971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.486162] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.486170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.486558] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.486565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 
00:29:25.840 [2024-07-15 21:05:29.486861] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.486869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.487234] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.487241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.487631] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.487639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.487841] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.487849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.488240] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.488249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.488661] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.488669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.488976] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.488984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.489372] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.489380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.489667] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.489674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 00:29:25.840 [2024-07-15 21:05:29.490053] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.490061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.840 qpair failed and we were unable to recover it. 
00:29:25.840 [2024-07-15 21:05:29.490450] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.840 [2024-07-15 21:05:29.490458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.490888] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.490895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.491110] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.491118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.491548] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.491555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.491826] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.491834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.492243] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.492251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.492656] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.492663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.493047] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.493053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.493451] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.493458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.493844] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.493852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 
00:29:25.841 [2024-07-15 21:05:29.494281] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.494289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.494686] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.494693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.495132] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.495140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.495531] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.495538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.495963] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.495971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.496368] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.496376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.496763] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.496770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.497161] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.497168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.497393] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.497400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.497698] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.497705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 
00:29:25.841 [2024-07-15 21:05:29.497923] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.497930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.498341] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.498349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.498742] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.498749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.499149] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.499157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.499568] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.499575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.499847] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.499854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.500164] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.500171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.500473] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.500480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.841 [2024-07-15 21:05:29.500874] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.841 [2024-07-15 21:05:29.500881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.841 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.501284] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.501291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 
00:29:25.842 [2024-07-15 21:05:29.501680] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.501687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.501913] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.501920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.502300] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.502307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.502719] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.502726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.503169] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.503176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.503598] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.503604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.503988] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.503994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.504475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.504483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.504834] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.504840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.505225] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.505231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 
00:29:25.842 [2024-07-15 21:05:29.505673] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.505679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.506107] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.506114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.506497] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.506503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.506892] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.506900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.507306] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.507333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.507536] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.507545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.507974] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.507983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.508290] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.508298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.508711] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.508718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.509200] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.509208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 
00:29:25.842 [2024-07-15 21:05:29.509627] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.509634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.509837] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.509845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.510268] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.510276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.510577] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.510584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.510979] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.510986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.511295] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.511302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.511725] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.511732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.512029] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.512036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.512429] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.512436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.512829] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.512836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 
00:29:25.842 [2024-07-15 21:05:29.513234] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.513241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.513521] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.513528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.513997] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.842 [2024-07-15 21:05:29.514003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.842 qpair failed and we were unable to recover it. 00:29:25.842 [2024-07-15 21:05:29.514277] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.514284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.514782] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.514789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.515176] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.515183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.515479] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.515486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.515894] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.515900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.516194] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.516201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.516620] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.516628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 
00:29:25.843 [2024-07-15 21:05:29.517052] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.517059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.517263] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.517270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.517538] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.517545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.517977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.517983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.518383] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.518390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.518696] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.518704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.519125] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.519133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.519550] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.519558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.519650] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.519657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.520095] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.520101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 
00:29:25.843 [2024-07-15 21:05:29.520322] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.520329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.520615] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.520622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.521018] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.521026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.521439] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.521446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.521751] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.521757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.522136] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.522144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.522523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.522529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.522928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.522935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.523250] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.523258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.523672] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.523678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 
00:29:25.843 [2024-07-15 21:05:29.523958] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.523967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.524261] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.524269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.843 [2024-07-15 21:05:29.524662] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.843 [2024-07-15 21:05:29.524670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.843 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.524890] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.524898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.525325] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.525333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.525732] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.525740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.526157] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.526164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.526607] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.526614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.527022] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.527028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.527459] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.527465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 
00:29:25.844 [2024-07-15 21:05:29.527674] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.527680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.528097] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.528103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.528487] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.528494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.528884] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.528890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.529297] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.529306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.529711] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.529718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.530132] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.530139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.530379] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.530385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.530472] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.530478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.530649] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.530655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 
00:29:25.844 [2024-07-15 21:05:29.530954] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.530961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.531381] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.531388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.531788] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.531794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.532181] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.532188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.532334] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.532341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.532748] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.532756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.533144] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.533152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.533227] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.533233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.533612] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.533619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.533690] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.533695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 
00:29:25.844 [2024-07-15 21:05:29.534087] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.534094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.534555] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.534563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.534977] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.534985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 [2024-07-15 21:05:29.535333] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.535341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:25.844 [2024-07-15 21:05:29.535550] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.535557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:25.844 [2024-07-15 21:05:29.535962] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.844 [2024-07-15 21:05:29.535971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.844 qpair failed and we were unable to recover it. 00:29:25.844 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:25.844 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:25.844 [2024-07-15 21:05:29.536369] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.536377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.845 [2024-07-15 21:05:29.536769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.536777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 
00:29:25.845 [2024-07-15 21:05:29.537167] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.537176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.537565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.537572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.537770] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.537781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.538196] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.538204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.538626] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.538633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.538933] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.538940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.539360] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.539368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.539722] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.539729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.540116] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.540130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.540519] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.540526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 
00:29:25.845 [2024-07-15 21:05:29.540913] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.540922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.541193] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.541201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.541591] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.541599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.541809] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.541816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.542254] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.542261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.542542] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.542549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.542973] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.542980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.543163] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.543171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.543360] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.543368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.543778] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.543785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 
00:29:25.845 [2024-07-15 21:05:29.544217] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.544225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.544556] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.544564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.544758] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.544766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.544981] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.544988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.545370] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.545377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.545763] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.545770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.546158] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.546165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.546371] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.546380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.546651] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.546658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.547071] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.547079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 
00:29:25.845 [2024-07-15 21:05:29.547165] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.547172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.845 [2024-07-15 21:05:29.547572] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.845 [2024-07-15 21:05:29.547580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.845 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.547989] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.547996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.548374] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.548382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.548771] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.548778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.549170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.549178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.549572] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.549579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.549770] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.549778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.550158] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.550165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.550544] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.550550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 
00:29:25.846 [2024-07-15 21:05:29.550939] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.550948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.551359] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.551368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.551677] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.551685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.552114] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.552126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.552510] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.552516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.552906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.552912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.553333] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.553341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.553655] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.553662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.554089] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.554096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.554493] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.554500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 
00:29:25.846 [2024-07-15 21:05:29.554906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.554913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.555144] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.555157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.555410] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.555425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.555816] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.555823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.556223] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.556230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.556554] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.846 [2024-07-15 21:05:29.556563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.846 qpair failed and we were unable to recover it. 00:29:25.846 [2024-07-15 21:05:29.556954] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.556961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.557167] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.557175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.557588] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.557595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.557664] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.557670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 
00:29:25.847 [2024-07-15 21:05:29.557888] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.557895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.558284] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.558291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.558601] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.558608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.558987] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.558995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.559393] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.559401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.559808] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.559816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.560242] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.560248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.560475] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.560482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.560865] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.560871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.561072] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.561080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 
00:29:25.847 [2024-07-15 21:05:29.561474] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.561481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.561744] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.561750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.562139] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.562147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.562523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.562531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.562915] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.562923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.563189] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.563196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.563620] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.563627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.564025] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.564032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.564183] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.564191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 00:29:25.847 [2024-07-15 21:05:29.564625] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.847 [2024-07-15 21:05:29.564631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.847 qpair failed and we were unable to recover it. 
00:29:25.848 [2024-07-15 21:05:29.564933] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.564941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.565165] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.565173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.565511] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.565519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.565932] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.565938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.566332] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.566340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.566746] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.566752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.567023] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.567031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.567428] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.567436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.567637] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.567646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.567959] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.567966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 
00:29:25.848 [2024-07-15 21:05:29.568376] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.568383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.568690] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.568697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.569103] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.569111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.569515] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.569522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.569742] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.569749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.570152] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.570160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.570562] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.570570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.570974] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.570981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.571248] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.571256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.571707] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.571713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 
00:29:25.848 [2024-07-15 21:05:29.572102] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.572109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.572499] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.572507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.572902] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.572909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.573410] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.848 [2024-07-15 21:05:29.573437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.848 qpair failed and we were unable to recover it. 00:29:25.848 [2024-07-15 21:05:29.573901] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.573910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 00:29:25.849 [2024-07-15 21:05:29.574408] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.574436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 00:29:25.849 [2024-07-15 21:05:29.574844] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.574853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 00:29:25.849 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.849 [2024-07-15 21:05:29.575407] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.575436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 
00:29:25.849 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:25.849 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.849 [2024-07-15 21:05:29.575841] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.575851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 00:29:25.849 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.849 [2024-07-15 21:05:29.576248] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.576257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 00:29:25.849 [2024-07-15 21:05:29.576545] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.576553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 00:29:25.849 [2024-07-15 21:05:29.576815] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.576822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 00:29:25.849 [2024-07-15 21:05:29.577208] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.577216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 00:29:25.849 [2024-07-15 21:05:29.577504] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.577511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 00:29:25.849 [2024-07-15 21:05:29.577941] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.577947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 00:29:25.849 [2024-07-15 21:05:29.578237] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.578245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 00:29:25.849 [2024-07-15 21:05:29.578535] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.578542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 
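Interleaved with the connect retries, the test starts building the target configuration: rpc_cmd bdev_malloc_create 64 512 -b Malloc0 creates a 64 MiB RAM-backed bdev with a 512-byte block size, named Malloc0 (the name is echoed by the RPC a few lines further down), which later backs the namespace of the test subsystem. Assuming a running nvmf_tgt and the SPDK repo checked out, the standalone equivalent is roughly:

    # 64 MiB malloc bdev, 512 B block size, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0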
00:29:25.849 [2024-07-15 21:05:29.578951] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.578958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 00:29:25.849 [2024-07-15 21:05:29.579187] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.579193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 00:29:25.849 [2024-07-15 21:05:29.579554] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.579561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 00:29:25.849 [2024-07-15 21:05:29.579963] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.579970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.849 qpair failed and we were unable to recover it. 00:29:25.849 [2024-07-15 21:05:29.580375] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.849 [2024-07-15 21:05:29.580383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.580769] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.580776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.580952] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.580960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.581357] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.581364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.581791] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.581799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.582004] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.582011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 
00:29:25.850 [2024-07-15 21:05:29.582212] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.582222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.582670] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.582677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.583071] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.583078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.583528] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.583535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.583947] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.583954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.584353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.584361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.584463] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.584469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.584844] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.584851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.585126] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.585134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.585310] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.585317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 
00:29:25.850 [2024-07-15 21:05:29.585749] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.585755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.586066] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.586073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.586287] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.586294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.586836] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.586843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.587101] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.587108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.587422] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.587429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.587817] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.587824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.850 qpair failed and we were unable to recover it. 00:29:25.850 [2024-07-15 21:05:29.588229] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.850 [2024-07-15 21:05:29.588237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.588654] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.588662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.588948] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.588955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 
00:29:25.851 [2024-07-15 21:05:29.589142] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.589149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.589374] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.589380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.589783] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.589789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.590186] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.590194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 Malloc0 00:29:25.851 [2024-07-15 21:05:29.590591] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.590599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.590987] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.590993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.591210] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.591217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.851 [2024-07-15 21:05:29.591634] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.591641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 
00:29:25.851 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:25.851 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.851 [2024-07-15 21:05:29.592040] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.592047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.851 [2024-07-15 21:05:29.592453] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.592460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.592864] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.592871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.593206] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.593213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.593670] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.593676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.593894] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.593900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.594312] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.594319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.594728] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.594735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.595129] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.595136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 
00:29:25.851 [2024-07-15 21:05:29.595236] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.595245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.595483] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.851 [2024-07-15 21:05:29.595489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.851 qpair failed and we were unable to recover it. 00:29:25.851 [2024-07-15 21:05:29.595853] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.595859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.596236] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.596243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.596663] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.596670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.596887] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.596895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.597281] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.597290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.597685] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.597692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.597958] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.852 [2024-07-15 21:05:29.598082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.598088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 
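The rpc_cmd nvmf_create_transport -t tcp -o call initializes the TCP transport inside the target (the -o flag comes from the harness's transport options); the "*** TCP Transport Init ***" notice just above is the target-side confirmation. Against a standalone target, a minimal sketch of the same step is:

    # create the NVMe-oF TCP transport on a running nvmf_tgt
    ./scripts/rpc.py nvmf_create_transport -t tcp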
00:29:25.852 [2024-07-15 21:05:29.598492] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.598500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.598885] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.598892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.599289] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.599296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.599692] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.599698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.599914] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.599920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.600206] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.600212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.600593] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.600600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.601015] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.601022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.601437] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.601444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.601836] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.601843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 
00:29:25.852 [2024-07-15 21:05:29.602145] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.602154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.602547] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.602555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.602981] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.602989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.603325] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.603332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.603804] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.603810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.604135] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.604142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.604596] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.852 [2024-07-15 21:05:29.604603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.852 qpair failed and we were unable to recover it. 00:29:25.852 [2024-07-15 21:05:29.605013] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.605020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.605420] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.605427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.605743] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.605750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 
00:29:25.853 [2024-07-15 21:05:29.606159] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.606167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.606565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.606572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.853 [2024-07-15 21:05:29.606965] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.606972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:25.853 [2024-07-15 21:05:29.607374] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.607382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.853 [2024-07-15 21:05:29.607593] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.607600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.853 [2024-07-15 21:05:29.607863] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.607871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.608328] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.608336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.608726] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.608732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 
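rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 creates the test subsystem, with -a allowing any host NQN to connect and -s setting its serial number. The standalone equivalent:

    # create the subsystem; -a = allow any host, -s = serial number
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001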
00:29:25.853 [2024-07-15 21:05:29.609023] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.609029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.609456] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.609463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.609686] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.609693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.609828] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.609835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.610127] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.610133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.610523] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.610530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.610919] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.610925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.611335] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.611344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.611565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.611572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.611980] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.611988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 
00:29:25.853 [2024-07-15 21:05:29.612398] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.612405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.612792] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.853 [2024-07-15 21:05:29.612798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.853 qpair failed and we were unable to recover it. 00:29:25.853 [2024-07-15 21:05:29.613111] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.613119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.613540] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.613547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.613976] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.613983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.614335] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.614363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.614846] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.614855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.615069] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.615076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.615543] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.615551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.615942] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.615949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 
00:29:25.854 [2024-07-15 21:05:29.616484] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.616512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.616918] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.616926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.617341] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.617368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.617845] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.617853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.618362] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.618389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.618772] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.618780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.854 [2024-07-15 21:05:29.619182] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.619190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:25.854 [2024-07-15 21:05:29.619606] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.619614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 
00:29:25.854 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.854 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.854 [2024-07-15 21:05:29.620038] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.620046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.620484] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.620491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.620890] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.620897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.621377] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.621384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.621682] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.621689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.622083] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.622090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.622305] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.622312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.622626] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.622632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.622902] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.622908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 
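rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 attaches the Malloc0 bdev created earlier as a namespace of cnode1 (the namespace ID is assigned automatically when -n is not given). The standalone equivalent:

    # expose Malloc0 as a namespace of cnode1; nsid is auto-assigned without -n
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0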
00:29:25.854 [2024-07-15 21:05:29.623340] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.623348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.623745] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.623752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.623957] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.854 [2024-07-15 21:05:29.623967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.854 qpair failed and we were unable to recover it. 00:29:25.854 [2024-07-15 21:05:29.624042] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.624048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.624353] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.624359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.624744] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.624751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.625125] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.625132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.625508] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.625514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.625906] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.625913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.626339] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.626347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 
00:29:25.855 [2024-07-15 21:05:29.626737] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.626744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.627001] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.627008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.627438] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.627445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.627835] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.627843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.628240] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.628247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.628355] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.628361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.628783] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.628789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.628985] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.628991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.629375] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.629382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.629786] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.629794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 
00:29:25.855 [2024-07-15 21:05:29.629987] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.629994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.630410] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.630417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.630808] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.630815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.855 [2024-07-15 21:05:29.631202] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.631216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.855 [2024-07-15 21:05:29.631662] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.631669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.855 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.855 [2024-07-15 21:05:29.632060] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.632067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.632323] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.632331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.632678] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.632685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 
00:29:25.855 [2024-07-15 21:05:29.633096] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.633104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.633519] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.633526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.634006] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.634013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.634427] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.634434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.634633] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.634639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.635082] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.855 [2024-07-15 21:05:29.635090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.855 qpair failed and we were unable to recover it. 00:29:25.855 [2024-07-15 21:05:29.635499] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.856 [2024-07-15 21:05:29.635506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.856 qpair failed and we were unable to recover it. 00:29:25.856 [2024-07-15 21:05:29.635726] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.856 [2024-07-15 21:05:29.635732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.856 qpair failed and we were unable to recover it. 00:29:25.856 [2024-07-15 21:05:29.635963] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.856 [2024-07-15 21:05:29.635970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.856 qpair failed and we were unable to recover it. 00:29:25.856 [2024-07-15 21:05:29.636449] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.856 [2024-07-15 21:05:29.636457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.856 qpair failed and we were unable to recover it. 
00:29:25.856 [2024-07-15 21:05:29.636658] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.856 [2024-07-15 21:05:29.636667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.856 qpair failed and we were unable to recover it. 00:29:25.856 [2024-07-15 21:05:29.636888] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.856 [2024-07-15 21:05:29.636894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.856 qpair failed and we were unable to recover it. 00:29:25.856 [2024-07-15 21:05:29.637367] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.856 [2024-07-15 21:05:29.637374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.856 qpair failed and we were unable to recover it. 00:29:25.856 [2024-07-15 21:05:29.637760] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.856 [2024-07-15 21:05:29.637766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.856 qpair failed and we were unable to recover it. 00:29:25.856 [2024-07-15 21:05:29.638154] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.856 [2024-07-15 21:05:29.638162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f045c000b90 with addr=10.0.0.2, port=4420 00:29:25.856 qpair failed and we were unable to recover it. 00:29:25.856 [2024-07-15 21:05:29.638219] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.856 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.856 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:25.856 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.856 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.856 [2024-07-15 21:05:29.648829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.856 [2024-07-15 21:05:29.648914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.856 [2024-07-15 21:05:29.648929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.856 [2024-07-15 21:05:29.648938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.856 [2024-07-15 21:05:29.648942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:25.856 [2024-07-15 21:05:29.648957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.856 qpair failed and we were unable to recover it. 
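At this point the target side comes back up: tcp.c logs the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice once the listener from the preceding rpc_cmd nvmf_subsystem_add_listener call is installed, so the plain connection-refused retries stop and the remaining failures move up a layer to the Fabrics CONNECT command. rpc_cmd in the autotest framework forwards to scripts/rpc.py; a hedged standalone equivalent of the two listener additions seen here (subsystem NQN, address and port taken from the log) would be:

  # Re-add the TCP listeners for the data subsystem and for discovery,
  # mirroring the rpc_cmd calls made by host/target_disconnect.sh above.
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420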
00:29:25.856 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.856 21:05:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1772202 00:29:25.856 [2024-07-15 21:05:29.658671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.856 [2024-07-15 21:05:29.658792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.856 [2024-07-15 21:05:29.658806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.856 [2024-07-15 21:05:29.658812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.856 [2024-07-15 21:05:29.658818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:25.856 [2024-07-15 21:05:29.658832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.856 qpair failed and we were unable to recover it. 00:29:25.856 [2024-07-15 21:05:29.668763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.856 [2024-07-15 21:05:29.668840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.856 [2024-07-15 21:05:29.668853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.856 [2024-07-15 21:05:29.668858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.856 [2024-07-15 21:05:29.668862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:25.856 [2024-07-15 21:05:29.668874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.856 qpair failed and we were unable to recover it. 00:29:25.856 [2024-07-15 21:05:29.678720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.856 [2024-07-15 21:05:29.678802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.856 [2024-07-15 21:05:29.678814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.856 [2024-07-15 21:05:29.678819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.856 [2024-07-15 21:05:29.678824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:25.856 [2024-07-15 21:05:29.678835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.856 qpair failed and we were unable to recover it. 
00:29:25.856 [2024-07-15 21:05:29.688848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.856 [2024-07-15 21:05:29.688929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.856 [2024-07-15 21:05:29.688942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.856 [2024-07-15 21:05:29.688947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.856 [2024-07-15 21:05:29.688951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:25.856 [2024-07-15 21:05:29.688965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.856 qpair failed and we were unable to recover it. 00:29:25.856 [2024-07-15 21:05:29.698767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.856 [2024-07-15 21:05:29.698839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.856 [2024-07-15 21:05:29.698851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.856 [2024-07-15 21:05:29.698857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.856 [2024-07-15 21:05:29.698861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:25.856 [2024-07-15 21:05:29.698872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.856 qpair failed and we were unable to recover it. 00:29:25.856 [2024-07-15 21:05:29.708806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.857 [2024-07-15 21:05:29.708875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 21:05:29.708888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 21:05:29.708893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 21:05:29.708897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:25.857 [2024-07-15 21:05:29.708908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.857 qpair failed and we were unable to recover it. 
00:29:26.119 [2024-07-15 21:05:29.718780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.119 [2024-07-15 21:05:29.718852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.119 [2024-07-15 21:05:29.718865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.119 [2024-07-15 21:05:29.718870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.119 [2024-07-15 21:05:29.718875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.119 [2024-07-15 21:05:29.718886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.119 qpair failed and we were unable to recover it. 00:29:26.119 [2024-07-15 21:05:29.728839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.119 [2024-07-15 21:05:29.728916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.119 [2024-07-15 21:05:29.728929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.119 [2024-07-15 21:05:29.728934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.119 [2024-07-15 21:05:29.728938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.119 [2024-07-15 21:05:29.728949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.119 qpair failed and we were unable to recover it. 00:29:26.119 [2024-07-15 21:05:29.738850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.119 [2024-07-15 21:05:29.738923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.119 [2024-07-15 21:05:29.738935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.119 [2024-07-15 21:05:29.738940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.119 [2024-07-15 21:05:29.738944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.119 [2024-07-15 21:05:29.738955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.119 qpair failed and we were unable to recover it. 
00:29:26.119 [2024-07-15 21:05:29.748895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.119 [2024-07-15 21:05:29.748965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.119 [2024-07-15 21:05:29.748978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.119 [2024-07-15 21:05:29.748983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.119 [2024-07-15 21:05:29.748987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.119 [2024-07-15 21:05:29.748998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.119 qpair failed and we were unable to recover it. 00:29:26.119 [2024-07-15 21:05:29.758927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.119 [2024-07-15 21:05:29.759003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.119 [2024-07-15 21:05:29.759022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.119 [2024-07-15 21:05:29.759028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.119 [2024-07-15 21:05:29.759033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.119 [2024-07-15 21:05:29.759047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.119 qpair failed and we were unable to recover it. 00:29:26.119 [2024-07-15 21:05:29.768984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.119 [2024-07-15 21:05:29.769056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.119 [2024-07-15 21:05:29.769070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.119 [2024-07-15 21:05:29.769075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.119 [2024-07-15 21:05:29.769080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.119 [2024-07-15 21:05:29.769091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.119 qpair failed and we were unable to recover it. 
00:29:26.119 [2024-07-15 21:05:29.778999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.119 [2024-07-15 21:05:29.779067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.119 [2024-07-15 21:05:29.779079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.119 [2024-07-15 21:05:29.779084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.119 [2024-07-15 21:05:29.779092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.119 [2024-07-15 21:05:29.779103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.119 qpair failed and we were unable to recover it. 00:29:26.119 [2024-07-15 21:05:29.789007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.119 [2024-07-15 21:05:29.789075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.119 [2024-07-15 21:05:29.789087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.119 [2024-07-15 21:05:29.789092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.119 [2024-07-15 21:05:29.789096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.119 [2024-07-15 21:05:29.789107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.119 qpair failed and we were unable to recover it. 00:29:26.119 [2024-07-15 21:05:29.799038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.119 [2024-07-15 21:05:29.799111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.119 [2024-07-15 21:05:29.799128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.119 [2024-07-15 21:05:29.799134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.119 [2024-07-15 21:05:29.799138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.119 [2024-07-15 21:05:29.799150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.119 qpair failed and we were unable to recover it. 
00:29:26.119 [2024-07-15 21:05:29.809073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.119 [2024-07-15 21:05:29.809148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.119 [2024-07-15 21:05:29.809161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.119 [2024-07-15 21:05:29.809166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.119 [2024-07-15 21:05:29.809170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.119 [2024-07-15 21:05:29.809181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.119 qpair failed and we were unable to recover it. 00:29:26.119 [2024-07-15 21:05:29.819111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.119 [2024-07-15 21:05:29.819183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.119 [2024-07-15 21:05:29.819196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.119 [2024-07-15 21:05:29.819201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 21:05:29.819205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.120 [2024-07-15 21:05:29.819216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 00:29:26.120 [2024-07-15 21:05:29.829133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 21:05:29.829210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 21:05:29.829223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 21:05:29.829228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 21:05:29.829232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.120 [2024-07-15 21:05:29.829243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 
00:29:26.120 [2024-07-15 21:05:29.839143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 21:05:29.839225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 21:05:29.839238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 21:05:29.839243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 21:05:29.839248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.120 [2024-07-15 21:05:29.839259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 00:29:26.120 [2024-07-15 21:05:29.849190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 21:05:29.849266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 21:05:29.849279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 21:05:29.849284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 21:05:29.849288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.120 [2024-07-15 21:05:29.849299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 00:29:26.120 [2024-07-15 21:05:29.859256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 21:05:29.859341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 21:05:29.859353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 21:05:29.859358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 21:05:29.859362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.120 [2024-07-15 21:05:29.859373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 
00:29:26.120 [2024-07-15 21:05:29.869272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 21:05:29.869339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 21:05:29.869352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 21:05:29.869360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 21:05:29.869364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.120 [2024-07-15 21:05:29.869375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 00:29:26.120 [2024-07-15 21:05:29.879280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 21:05:29.879356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 21:05:29.879368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 21:05:29.879373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 21:05:29.879378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.120 [2024-07-15 21:05:29.879389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 00:29:26.120 [2024-07-15 21:05:29.889301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 21:05:29.889378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 21:05:29.889390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 21:05:29.889395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 21:05:29.889400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.120 [2024-07-15 21:05:29.889410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 
00:29:26.120 [2024-07-15 21:05:29.899312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 21:05:29.899381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 21:05:29.899393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 21:05:29.899398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 21:05:29.899402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.120 [2024-07-15 21:05:29.899413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 00:29:26.120 [2024-07-15 21:05:29.909377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 21:05:29.909446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 21:05:29.909458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 21:05:29.909464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 21:05:29.909468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.120 [2024-07-15 21:05:29.909479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 00:29:26.120 [2024-07-15 21:05:29.919382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 21:05:29.919567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 21:05:29.919580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 21:05:29.919585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 21:05:29.919590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.120 [2024-07-15 21:05:29.919601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 
00:29:26.120 [2024-07-15 21:05:29.929496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 21:05:29.929613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 21:05:29.929625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 21:05:29.929630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 21:05:29.929634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.120 [2024-07-15 21:05:29.929645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 00:29:26.120 [2024-07-15 21:05:29.939493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 21:05:29.939586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 21:05:29.939598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 21:05:29.939603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 21:05:29.939608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.121 [2024-07-15 21:05:29.939619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 00:29:26.121 [2024-07-15 21:05:29.949511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 21:05:29.949580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 21:05:29.949593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 21:05:29.949599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 21:05:29.949603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.121 [2024-07-15 21:05:29.949614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 21:05:29.959465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 21:05:29.959532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 21:05:29.959545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 21:05:29.959554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 21:05:29.959558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.121 [2024-07-15 21:05:29.959569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 00:29:26.121 [2024-07-15 21:05:29.969514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 21:05:29.969592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 21:05:29.969604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 21:05:29.969609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 21:05:29.969614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.121 [2024-07-15 21:05:29.969624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 00:29:26.121 [2024-07-15 21:05:29.979515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 21:05:29.979586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 21:05:29.979598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 21:05:29.979603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 21:05:29.979607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.121 [2024-07-15 21:05:29.979618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 21:05:29.989550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 21:05:29.989650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 21:05:29.989663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 21:05:29.989668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 21:05:29.989672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.121 [2024-07-15 21:05:29.989683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 00:29:26.121 [2024-07-15 21:05:29.999597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 21:05:29.999668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 21:05:29.999680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 21:05:29.999686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 21:05:29.999690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.121 [2024-07-15 21:05:29.999701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 00:29:26.121 [2024-07-15 21:05:30.009669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 21:05:30.009745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 21:05:30.009759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 21:05:30.009764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 21:05:30.009769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.121 [2024-07-15 21:05:30.009780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.384 [2024-07-15 21:05:30.019651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.384 [2024-07-15 21:05:30.019724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.384 [2024-07-15 21:05:30.019743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.384 [2024-07-15 21:05:30.019749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.384 [2024-07-15 21:05:30.019754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.384 [2024-07-15 21:05:30.019768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.384 qpair failed and we were unable to recover it. 00:29:26.384 [2024-07-15 21:05:30.029586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.384 [2024-07-15 21:05:30.029693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.384 [2024-07-15 21:05:30.029716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.384 [2024-07-15 21:05:30.029726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.384 [2024-07-15 21:05:30.029733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.384 [2024-07-15 21:05:30.029754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.384 qpair failed and we were unable to recover it. 00:29:26.384 [2024-07-15 21:05:30.039694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.384 [2024-07-15 21:05:30.039765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.384 [2024-07-15 21:05:30.039780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.384 [2024-07-15 21:05:30.039786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.384 [2024-07-15 21:05:30.039790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.384 [2024-07-15 21:05:30.039803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.384 qpair failed and we were unable to recover it. 
00:29:26.384 [2024-07-15 21:05:30.049711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.384 [2024-07-15 21:05:30.049783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.384 [2024-07-15 21:05:30.049800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.384 [2024-07-15 21:05:30.049805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.384 [2024-07-15 21:05:30.049810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.384 [2024-07-15 21:05:30.049822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.384 qpair failed and we were unable to recover it. 00:29:26.384 [2024-07-15 21:05:30.059760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.384 [2024-07-15 21:05:30.059829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.384 [2024-07-15 21:05:30.059842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.384 [2024-07-15 21:05:30.059851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.384 [2024-07-15 21:05:30.059855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.384 [2024-07-15 21:05:30.059867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.384 qpair failed and we were unable to recover it. 00:29:26.384 [2024-07-15 21:05:30.069661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.384 [2024-07-15 21:05:30.069737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.384 [2024-07-15 21:05:30.069752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.384 [2024-07-15 21:05:30.069758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.384 [2024-07-15 21:05:30.069763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.384 [2024-07-15 21:05:30.069775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.384 qpair failed and we were unable to recover it. 
00:29:26.384 [2024-07-15 21:05:30.079799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.384 [2024-07-15 21:05:30.079868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.384 [2024-07-15 21:05:30.079880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.384 [2024-07-15 21:05:30.079885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.384 [2024-07-15 21:05:30.079890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.384 [2024-07-15 21:05:30.079901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.384 qpair failed and we were unable to recover it. 00:29:26.384 [2024-07-15 21:05:30.089877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.384 [2024-07-15 21:05:30.089981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.384 [2024-07-15 21:05:30.089993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.384 [2024-07-15 21:05:30.089999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.384 [2024-07-15 21:05:30.090003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.384 [2024-07-15 21:05:30.090018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.384 qpair failed and we were unable to recover it. 00:29:26.384 [2024-07-15 21:05:30.099731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.384 [2024-07-15 21:05:30.099800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.384 [2024-07-15 21:05:30.099812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.384 [2024-07-15 21:05:30.099817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.384 [2024-07-15 21:05:30.099822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.384 [2024-07-15 21:05:30.099833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.385 qpair failed and we were unable to recover it. 
00:29:26.385 [2024-07-15 21:05:30.109892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.385 [2024-07-15 21:05:30.109960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.385 [2024-07-15 21:05:30.109974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.385 [2024-07-15 21:05:30.109979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.385 [2024-07-15 21:05:30.109983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.385 [2024-07-15 21:05:30.109996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.385 qpair failed and we were unable to recover it. 00:29:26.385 [2024-07-15 21:05:30.119930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.385 [2024-07-15 21:05:30.120006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.385 [2024-07-15 21:05:30.120025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.385 [2024-07-15 21:05:30.120031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.385 [2024-07-15 21:05:30.120036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.385 [2024-07-15 21:05:30.120050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.385 qpair failed and we were unable to recover it. 00:29:26.385 [2024-07-15 21:05:30.129968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.385 [2024-07-15 21:05:30.130075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.385 [2024-07-15 21:05:30.130089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.385 [2024-07-15 21:05:30.130095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.385 [2024-07-15 21:05:30.130099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.385 [2024-07-15 21:05:30.130112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.385 qpair failed and we were unable to recover it. 
00:29:26.385 [2024-07-15 21:05:30.139959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.385 [2024-07-15 21:05:30.140026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.385 [2024-07-15 21:05:30.140043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.385 [2024-07-15 21:05:30.140049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.385 [2024-07-15 21:05:30.140053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.385 [2024-07-15 21:05:30.140065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.385 qpair failed and we were unable to recover it. 00:29:26.385 [2024-07-15 21:05:30.149963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.385 [2024-07-15 21:05:30.150031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.385 [2024-07-15 21:05:30.150045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.385 [2024-07-15 21:05:30.150050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.385 [2024-07-15 21:05:30.150055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.385 [2024-07-15 21:05:30.150066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.385 qpair failed and we were unable to recover it. 00:29:26.385 [2024-07-15 21:05:30.160010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.385 [2024-07-15 21:05:30.160081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.385 [2024-07-15 21:05:30.160094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.385 [2024-07-15 21:05:30.160099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.385 [2024-07-15 21:05:30.160104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.385 [2024-07-15 21:05:30.160115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.385 qpair failed and we were unable to recover it. 
00:29:26.385 [2024-07-15 21:05:30.170049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.385 [2024-07-15 21:05:30.170127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.385 [2024-07-15 21:05:30.170139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.385 [2024-07-15 21:05:30.170145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.385 [2024-07-15 21:05:30.170150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.385 [2024-07-15 21:05:30.170161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.385 qpair failed and we were unable to recover it. 00:29:26.385 [2024-07-15 21:05:30.180053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.385 [2024-07-15 21:05:30.180129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.385 [2024-07-15 21:05:30.180141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.385 [2024-07-15 21:05:30.180147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.385 [2024-07-15 21:05:30.180154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.385 [2024-07-15 21:05:30.180166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.385 qpair failed and we were unable to recover it. 00:29:26.385 [2024-07-15 21:05:30.190125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.385 [2024-07-15 21:05:30.190201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.385 [2024-07-15 21:05:30.190215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.385 [2024-07-15 21:05:30.190220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.385 [2024-07-15 21:05:30.190225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.385 [2024-07-15 21:05:30.190236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.385 qpair failed and we were unable to recover it. 
00:29:26.385 [2024-07-15 21:05:30.200138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.385 [2024-07-15 21:05:30.200213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.385 [2024-07-15 21:05:30.200226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.385 [2024-07-15 21:05:30.200231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.385 [2024-07-15 21:05:30.200236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.385 [2024-07-15 21:05:30.200247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.385 qpair failed and we were unable to recover it. 00:29:26.385 [2024-07-15 21:05:30.210155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.385 [2024-07-15 21:05:30.210229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.385 [2024-07-15 21:05:30.210242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.385 [2024-07-15 21:05:30.210247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.385 [2024-07-15 21:05:30.210251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.385 [2024-07-15 21:05:30.210263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.385 qpair failed and we were unable to recover it. 00:29:26.385 [2024-07-15 21:05:30.220076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.385 [2024-07-15 21:05:30.220152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.385 [2024-07-15 21:05:30.220165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.385 [2024-07-15 21:05:30.220171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.385 [2024-07-15 21:05:30.220175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.385 [2024-07-15 21:05:30.220186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.385 qpair failed and we were unable to recover it. 
00:29:26.385 [2024-07-15 21:05:30.230214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.386 [2024-07-15 21:05:30.230332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.386 [2024-07-15 21:05:30.230344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.386 [2024-07-15 21:05:30.230350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.386 [2024-07-15 21:05:30.230354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.386 [2024-07-15 21:05:30.230365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.386 qpair failed and we were unable to recover it. 00:29:26.386 [2024-07-15 21:05:30.240267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.386 [2024-07-15 21:05:30.240339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.386 [2024-07-15 21:05:30.240352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.386 [2024-07-15 21:05:30.240358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.386 [2024-07-15 21:05:30.240362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.386 [2024-07-15 21:05:30.240374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.386 qpair failed and we were unable to recover it. 00:29:26.386 [2024-07-15 21:05:30.250291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.386 [2024-07-15 21:05:30.250363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.386 [2024-07-15 21:05:30.250375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.386 [2024-07-15 21:05:30.250381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.386 [2024-07-15 21:05:30.250385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.386 [2024-07-15 21:05:30.250396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.386 qpair failed and we were unable to recover it. 
00:29:26.386 [2024-07-15 21:05:30.260323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.386 [2024-07-15 21:05:30.260391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.386 [2024-07-15 21:05:30.260404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.386 [2024-07-15 21:05:30.260409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.386 [2024-07-15 21:05:30.260413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.386 [2024-07-15 21:05:30.260424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.386 qpair failed and we were unable to recover it. 00:29:26.386 [2024-07-15 21:05:30.270266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.386 [2024-07-15 21:05:30.270339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.386 [2024-07-15 21:05:30.270351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.386 [2024-07-15 21:05:30.270359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.386 [2024-07-15 21:05:30.270363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.386 [2024-07-15 21:05:30.270374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.386 qpair failed and we were unable to recover it. 00:29:26.654 [2024-07-15 21:05:30.280397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.280471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.280483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.280488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.280493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.280503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 
00:29:26.654 [2024-07-15 21:05:30.290373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.290444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.290456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.290461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.290466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.290477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 00:29:26.654 [2024-07-15 21:05:30.300468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.300584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.300597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.300602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.300606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.300617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 00:29:26.654 [2024-07-15 21:05:30.310428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.310506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.310519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.310524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.310528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.310539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 
00:29:26.654 [2024-07-15 21:05:30.320456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.320527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.320540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.320545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.320549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.320560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 00:29:26.654 [2024-07-15 21:05:30.330481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.330556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.330568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.330573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.330578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.330589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 00:29:26.654 [2024-07-15 21:05:30.340419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.340486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.340498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.340503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.340507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.340518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 
00:29:26.654 [2024-07-15 21:05:30.350541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.350612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.350625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.350630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.350634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.350645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 00:29:26.654 [2024-07-15 21:05:30.360570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.360645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.360657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.360665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.360670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.360680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 00:29:26.654 [2024-07-15 21:05:30.370620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.370707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.370719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.370724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.370728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.370739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 
00:29:26.654 [2024-07-15 21:05:30.380629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.380697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.380709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.380714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.380719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.380730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 00:29:26.654 [2024-07-15 21:05:30.390669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.390741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.390753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.390758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.390762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.390773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 00:29:26.654 [2024-07-15 21:05:30.400715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.400798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.400817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.400823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.400828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.400843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 
00:29:26.654 [2024-07-15 21:05:30.410727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.410807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.410826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.410832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.410836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.410851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 00:29:26.654 [2024-07-15 21:05:30.420759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.420848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.420861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.420866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.420870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.420883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 00:29:26.654 [2024-07-15 21:05:30.430824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.430909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.430928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.430934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.430939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.430953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.654 qpair failed and we were unable to recover it. 
00:29:26.654 [2024-07-15 21:05:30.440833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.654 [2024-07-15 21:05:30.440909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.654 [2024-07-15 21:05:30.440928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.654 [2024-07-15 21:05:30.440934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.654 [2024-07-15 21:05:30.440939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.654 [2024-07-15 21:05:30.440953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.655 qpair failed and we were unable to recover it. 00:29:26.655 [2024-07-15 21:05:30.450848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.655 [2024-07-15 21:05:30.450931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.655 [2024-07-15 21:05:30.450953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.655 [2024-07-15 21:05:30.450960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.655 [2024-07-15 21:05:30.450964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.655 [2024-07-15 21:05:30.450979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.655 qpair failed and we were unable to recover it. 00:29:26.655 [2024-07-15 21:05:30.460865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.655 [2024-07-15 21:05:30.460932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.655 [2024-07-15 21:05:30.460946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.655 [2024-07-15 21:05:30.460952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.655 [2024-07-15 21:05:30.460956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.655 [2024-07-15 21:05:30.460968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.655 qpair failed and we were unable to recover it. 
00:29:26.655 [2024-07-15 21:05:30.470906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.655 [2024-07-15 21:05:30.470971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.655 [2024-07-15 21:05:30.470983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.655 [2024-07-15 21:05:30.470989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.655 [2024-07-15 21:05:30.470993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.655 [2024-07-15 21:05:30.471004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.655 qpair failed and we were unable to recover it. 00:29:26.655 [2024-07-15 21:05:30.480916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.655 [2024-07-15 21:05:30.480987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.655 [2024-07-15 21:05:30.480999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.655 [2024-07-15 21:05:30.481004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.655 [2024-07-15 21:05:30.481008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.655 [2024-07-15 21:05:30.481019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.655 qpair failed and we were unable to recover it. 00:29:26.655 [2024-07-15 21:05:30.490937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.655 [2024-07-15 21:05:30.491007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.655 [2024-07-15 21:05:30.491020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.655 [2024-07-15 21:05:30.491025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.655 [2024-07-15 21:05:30.491029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.655 [2024-07-15 21:05:30.491043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.655 qpair failed and we were unable to recover it. 
00:29:26.655 [2024-07-15 21:05:30.500965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.655 [2024-07-15 21:05:30.501041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.655 [2024-07-15 21:05:30.501054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.655 [2024-07-15 21:05:30.501059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.655 [2024-07-15 21:05:30.501063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.655 [2024-07-15 21:05:30.501073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.655 qpair failed and we were unable to recover it. 00:29:26.655 [2024-07-15 21:05:30.511030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.655 [2024-07-15 21:05:30.511100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.655 [2024-07-15 21:05:30.511112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.655 [2024-07-15 21:05:30.511117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.655 [2024-07-15 21:05:30.511126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.655 [2024-07-15 21:05:30.511138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.655 qpair failed and we were unable to recover it. 00:29:26.655 [2024-07-15 21:05:30.521033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.655 [2024-07-15 21:05:30.521104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.655 [2024-07-15 21:05:30.521116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.655 [2024-07-15 21:05:30.521121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.655 [2024-07-15 21:05:30.521130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.655 [2024-07-15 21:05:30.521141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.655 qpair failed and we were unable to recover it. 
00:29:26.655 [2024-07-15 21:05:30.531012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.655 [2024-07-15 21:05:30.531090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.655 [2024-07-15 21:05:30.531102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.655 [2024-07-15 21:05:30.531106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.655 [2024-07-15 21:05:30.531111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.655 [2024-07-15 21:05:30.531124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.655 qpair failed and we were unable to recover it. 00:29:26.655 [2024-07-15 21:05:30.541061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.655 [2024-07-15 21:05:30.541134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.655 [2024-07-15 21:05:30.541151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.655 [2024-07-15 21:05:30.541156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.655 [2024-07-15 21:05:30.541161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.655 [2024-07-15 21:05:30.541172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.655 qpair failed and we were unable to recover it. 00:29:26.916 [2024-07-15 21:05:30.551075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.916 [2024-07-15 21:05:30.551149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.916 [2024-07-15 21:05:30.551163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.916 [2024-07-15 21:05:30.551168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.916 [2024-07-15 21:05:30.551173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.916 [2024-07-15 21:05:30.551185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.916 qpair failed and we were unable to recover it. 
00:29:26.916 [2024-07-15 21:05:30.561131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.916 [2024-07-15 21:05:30.561302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.916 [2024-07-15 21:05:30.561314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.916 [2024-07-15 21:05:30.561320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.916 [2024-07-15 21:05:30.561324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.916 [2024-07-15 21:05:30.561335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.916 qpair failed and we were unable to recover it. 00:29:26.916 [2024-07-15 21:05:30.571191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.916 [2024-07-15 21:05:30.571270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.916 [2024-07-15 21:05:30.571283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.916 [2024-07-15 21:05:30.571288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.916 [2024-07-15 21:05:30.571292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.916 [2024-07-15 21:05:30.571303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.917 qpair failed and we were unable to recover it. 00:29:26.917 [2024-07-15 21:05:30.581182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.917 [2024-07-15 21:05:30.581295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.917 [2024-07-15 21:05:30.581307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.917 [2024-07-15 21:05:30.581312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.917 [2024-07-15 21:05:30.581319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.917 [2024-07-15 21:05:30.581330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.917 qpair failed and we were unable to recover it. 
00:29:26.917 [2024-07-15 21:05:30.591215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.917 [2024-07-15 21:05:30.591282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.917 [2024-07-15 21:05:30.591296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.917 [2024-07-15 21:05:30.591301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.917 [2024-07-15 21:05:30.591305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.917 [2024-07-15 21:05:30.591317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.917 qpair failed and we were unable to recover it. 00:29:26.917 [2024-07-15 21:05:30.601275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.917 [2024-07-15 21:05:30.601352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.917 [2024-07-15 21:05:30.601365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.917 [2024-07-15 21:05:30.601370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.917 [2024-07-15 21:05:30.601374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.917 [2024-07-15 21:05:30.601385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.917 qpair failed and we were unable to recover it. 00:29:26.917 [2024-07-15 21:05:30.611233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.917 [2024-07-15 21:05:30.611337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.917 [2024-07-15 21:05:30.611350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.917 [2024-07-15 21:05:30.611355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.917 [2024-07-15 21:05:30.611359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.917 [2024-07-15 21:05:30.611370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.917 qpair failed and we were unable to recover it. 
00:29:26.917 [2024-07-15 21:05:30.621297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.917 [2024-07-15 21:05:30.621368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.917 [2024-07-15 21:05:30.621380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.917 [2024-07-15 21:05:30.621385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.917 [2024-07-15 21:05:30.621389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.917 [2024-07-15 21:05:30.621400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.917 qpair failed and we were unable to recover it. 00:29:26.917 [2024-07-15 21:05:30.631320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.917 [2024-07-15 21:05:30.631393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.917 [2024-07-15 21:05:30.631406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.917 [2024-07-15 21:05:30.631411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.917 [2024-07-15 21:05:30.631415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.917 [2024-07-15 21:05:30.631426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.917 qpair failed and we were unable to recover it. 00:29:26.917 [2024-07-15 21:05:30.641374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.917 [2024-07-15 21:05:30.641445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.917 [2024-07-15 21:05:30.641457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.917 [2024-07-15 21:05:30.641462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.917 [2024-07-15 21:05:30.641466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.917 [2024-07-15 21:05:30.641477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.917 qpair failed and we were unable to recover it. 
00:29:26.917 [2024-07-15 21:05:30.651487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.917 [2024-07-15 21:05:30.651570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.917 [2024-07-15 21:05:30.651582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.917 [2024-07-15 21:05:30.651587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.917 [2024-07-15 21:05:30.651591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.917 [2024-07-15 21:05:30.651602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.917 qpair failed and we were unable to recover it. 00:29:26.917 [2024-07-15 21:05:30.661425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.917 [2024-07-15 21:05:30.661511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.917 [2024-07-15 21:05:30.661523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.917 [2024-07-15 21:05:30.661528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.917 [2024-07-15 21:05:30.661532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.917 [2024-07-15 21:05:30.661543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.917 qpair failed and we were unable to recover it. 00:29:26.917 [2024-07-15 21:05:30.671477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.917 [2024-07-15 21:05:30.671585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.917 [2024-07-15 21:05:30.671597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.917 [2024-07-15 21:05:30.671602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.917 [2024-07-15 21:05:30.671610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.917 [2024-07-15 21:05:30.671622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.917 qpair failed and we were unable to recover it. 
00:29:26.917 [2024-07-15 21:05:30.681368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.917 [2024-07-15 21:05:30.681436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.917 [2024-07-15 21:05:30.681448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.917 [2024-07-15 21:05:30.681454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.917 [2024-07-15 21:05:30.681459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.917 [2024-07-15 21:05:30.681469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.917 qpair failed and we were unable to recover it. 00:29:26.917 [2024-07-15 21:05:30.691493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.917 [2024-07-15 21:05:30.691580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.917 [2024-07-15 21:05:30.691592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.917 [2024-07-15 21:05:30.691597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.917 [2024-07-15 21:05:30.691602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.918 [2024-07-15 21:05:30.691613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.918 qpair failed and we were unable to recover it. 00:29:26.918 [2024-07-15 21:05:30.701520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.918 [2024-07-15 21:05:30.701590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.918 [2024-07-15 21:05:30.701603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.918 [2024-07-15 21:05:30.701608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.918 [2024-07-15 21:05:30.701612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.918 [2024-07-15 21:05:30.701623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.918 qpair failed and we were unable to recover it. 
00:29:26.918 [2024-07-15 21:05:30.711551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.918 [2024-07-15 21:05:30.711622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.918 [2024-07-15 21:05:30.711635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.918 [2024-07-15 21:05:30.711640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.918 [2024-07-15 21:05:30.711644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.918 [2024-07-15 21:05:30.711655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.918 qpair failed and we were unable to recover it. 00:29:26.918 [2024-07-15 21:05:30.721584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.918 [2024-07-15 21:05:30.721678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.918 [2024-07-15 21:05:30.721690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.918 [2024-07-15 21:05:30.721695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.918 [2024-07-15 21:05:30.721700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.918 [2024-07-15 21:05:30.721710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.918 qpair failed and we were unable to recover it. 00:29:26.918 [2024-07-15 21:05:30.731646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.918 [2024-07-15 21:05:30.731759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.918 [2024-07-15 21:05:30.731771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.918 [2024-07-15 21:05:30.731776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.918 [2024-07-15 21:05:30.731781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.918 [2024-07-15 21:05:30.731792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.918 qpair failed and we were unable to recover it. 
00:29:26.918 [2024-07-15 21:05:30.741655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.918 [2024-07-15 21:05:30.741728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.918 [2024-07-15 21:05:30.741740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.918 [2024-07-15 21:05:30.741745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.918 [2024-07-15 21:05:30.741750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.918 [2024-07-15 21:05:30.741760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.918 qpair failed and we were unable to recover it. 00:29:26.918 [2024-07-15 21:05:30.751651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.918 [2024-07-15 21:05:30.751723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.918 [2024-07-15 21:05:30.751742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.918 [2024-07-15 21:05:30.751748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.918 [2024-07-15 21:05:30.751754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.918 [2024-07-15 21:05:30.751768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.918 qpair failed and we were unable to recover it. 00:29:26.918 [2024-07-15 21:05:30.761690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.918 [2024-07-15 21:05:30.761763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.918 [2024-07-15 21:05:30.761782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.918 [2024-07-15 21:05:30.761792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.918 [2024-07-15 21:05:30.761797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.918 [2024-07-15 21:05:30.761811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.918 qpair failed and we were unable to recover it. 
00:29:26.918 [2024-07-15 21:05:30.771642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.918 [2024-07-15 21:05:30.771745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.918 [2024-07-15 21:05:30.771764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.918 [2024-07-15 21:05:30.771771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.918 [2024-07-15 21:05:30.771776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.918 [2024-07-15 21:05:30.771790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.918 qpair failed and we were unable to recover it. 00:29:26.918 [2024-07-15 21:05:30.781732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.918 [2024-07-15 21:05:30.781808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.918 [2024-07-15 21:05:30.781827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.918 [2024-07-15 21:05:30.781833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.918 [2024-07-15 21:05:30.781838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.918 [2024-07-15 21:05:30.781852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.918 qpair failed and we were unable to recover it. 00:29:26.918 [2024-07-15 21:05:30.791763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.918 [2024-07-15 21:05:30.791832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.918 [2024-07-15 21:05:30.791845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.918 [2024-07-15 21:05:30.791850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.918 [2024-07-15 21:05:30.791855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.918 [2024-07-15 21:05:30.791866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.918 qpair failed and we were unable to recover it. 
00:29:26.918 [2024-07-15 21:05:30.801842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.918 [2024-07-15 21:05:30.801921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.918 [2024-07-15 21:05:30.801934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.918 [2024-07-15 21:05:30.801939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.918 [2024-07-15 21:05:30.801943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:26.918 [2024-07-15 21:05:30.801954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.918 qpair failed and we were unable to recover it. 00:29:27.181 [2024-07-15 21:05:30.811819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.181 [2024-07-15 21:05:30.811892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.181 [2024-07-15 21:05:30.811905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.181 [2024-07-15 21:05:30.811910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.181 [2024-07-15 21:05:30.811914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.181 [2024-07-15 21:05:30.811926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.181 qpair failed and we were unable to recover it. 00:29:27.181 [2024-07-15 21:05:30.821744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.181 [2024-07-15 21:05:30.821814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.181 [2024-07-15 21:05:30.821827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.181 [2024-07-15 21:05:30.821832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.181 [2024-07-15 21:05:30.821836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.181 [2024-07-15 21:05:30.821847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.181 qpair failed and we were unable to recover it. 
00:29:27.181 [2024-07-15 21:05:30.831903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.181 [2024-07-15 21:05:30.831969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.181 [2024-07-15 21:05:30.831981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.181 [2024-07-15 21:05:30.831986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.181 [2024-07-15 21:05:30.831991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.181 [2024-07-15 21:05:30.832002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.181 qpair failed and we were unable to recover it. 00:29:27.181 [2024-07-15 21:05:30.841918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.181 [2024-07-15 21:05:30.841986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.181 [2024-07-15 21:05:30.841998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.181 [2024-07-15 21:05:30.842003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.181 [2024-07-15 21:05:30.842008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.181 [2024-07-15 21:05:30.842019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.181 qpair failed and we were unable to recover it. 00:29:27.181 [2024-07-15 21:05:30.851931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.181 [2024-07-15 21:05:30.852004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.181 [2024-07-15 21:05:30.852019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.181 [2024-07-15 21:05:30.852024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.181 [2024-07-15 21:05:30.852029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.181 [2024-07-15 21:05:30.852040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.181 qpair failed and we were unable to recover it. 
00:29:27.181 [2024-07-15 21:05:30.861967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.181 [2024-07-15 21:05:30.862038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.181 [2024-07-15 21:05:30.862050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.181 [2024-07-15 21:05:30.862056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.181 [2024-07-15 21:05:30.862060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.182 [2024-07-15 21:05:30.862071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.182 qpair failed and we were unable to recover it. 00:29:27.182 [2024-07-15 21:05:30.871991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.182 [2024-07-15 21:05:30.872074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.182 [2024-07-15 21:05:30.872086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.182 [2024-07-15 21:05:30.872092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.182 [2024-07-15 21:05:30.872096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.182 [2024-07-15 21:05:30.872107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.182 qpair failed and we were unable to recover it. 00:29:27.182 [2024-07-15 21:05:30.882034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.182 [2024-07-15 21:05:30.882135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.182 [2024-07-15 21:05:30.882147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.182 [2024-07-15 21:05:30.882152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.182 [2024-07-15 21:05:30.882157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.182 [2024-07-15 21:05:30.882168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.182 qpair failed and we were unable to recover it. 
00:29:27.182 [2024-07-15 21:05:30.892037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.182 [2024-07-15 21:05:30.892108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.182 [2024-07-15 21:05:30.892120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.182 [2024-07-15 21:05:30.892129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.182 [2024-07-15 21:05:30.892134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.182 [2024-07-15 21:05:30.892148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.182 qpair failed and we were unable to recover it. 00:29:27.182 [2024-07-15 21:05:30.902091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.182 [2024-07-15 21:05:30.902160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.182 [2024-07-15 21:05:30.902173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.182 [2024-07-15 21:05:30.902179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.182 [2024-07-15 21:05:30.902183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.182 [2024-07-15 21:05:30.902194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.182 qpair failed and we were unable to recover it. 00:29:27.182 [2024-07-15 21:05:30.912131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.182 [2024-07-15 21:05:30.912201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.182 [2024-07-15 21:05:30.912213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.182 [2024-07-15 21:05:30.912218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.182 [2024-07-15 21:05:30.912222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.182 [2024-07-15 21:05:30.912233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.182 qpair failed and we were unable to recover it. 
00:29:27.182 [2024-07-15 21:05:30.922166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.182 [2024-07-15 21:05:30.922235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.182 [2024-07-15 21:05:30.922248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.182 [2024-07-15 21:05:30.922253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.182 [2024-07-15 21:05:30.922257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.182 [2024-07-15 21:05:30.922268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.182 qpair failed and we were unable to recover it. 00:29:27.182 [2024-07-15 21:05:30.932165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.182 [2024-07-15 21:05:30.932239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.182 [2024-07-15 21:05:30.932252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.182 [2024-07-15 21:05:30.932257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.182 [2024-07-15 21:05:30.932261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.182 [2024-07-15 21:05:30.932272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.182 qpair failed and we were unable to recover it. 00:29:27.182 [2024-07-15 21:05:30.942153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.182 [2024-07-15 21:05:30.942218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.182 [2024-07-15 21:05:30.942233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.182 [2024-07-15 21:05:30.942239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.182 [2024-07-15 21:05:30.942243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.182 [2024-07-15 21:05:30.942254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.182 qpair failed and we were unable to recover it. 
00:29:27.182 [2024-07-15 21:05:30.952267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.182 [2024-07-15 21:05:30.952340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.182 [2024-07-15 21:05:30.952352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.182 [2024-07-15 21:05:30.952357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.182 [2024-07-15 21:05:30.952361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.182 [2024-07-15 21:05:30.952373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.182 qpair failed and we were unable to recover it. 00:29:27.182 [2024-07-15 21:05:30.962266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.182 [2024-07-15 21:05:30.962336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.182 [2024-07-15 21:05:30.962349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.182 [2024-07-15 21:05:30.962355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.182 [2024-07-15 21:05:30.962359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.182 [2024-07-15 21:05:30.962371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.182 qpair failed and we were unable to recover it. 00:29:27.182 [2024-07-15 21:05:30.972322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.182 [2024-07-15 21:05:30.972437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.182 [2024-07-15 21:05:30.972450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.182 [2024-07-15 21:05:30.972455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.182 [2024-07-15 21:05:30.972460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.182 [2024-07-15 21:05:30.972471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.182 qpair failed and we were unable to recover it. 
00:29:27.182 [2024-07-15 21:05:30.982315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.182 [2024-07-15 21:05:30.982389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.182 [2024-07-15 21:05:30.982401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.182 [2024-07-15 21:05:30.982406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.182 [2024-07-15 21:05:30.982413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.182 [2024-07-15 21:05:30.982424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.182 qpair failed and we were unable to recover it. 00:29:27.182 [2024-07-15 21:05:30.992368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.182 [2024-07-15 21:05:30.992449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.182 [2024-07-15 21:05:30.992461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.182 [2024-07-15 21:05:30.992466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.182 [2024-07-15 21:05:30.992470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.182 [2024-07-15 21:05:30.992481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.182 qpair failed and we were unable to recover it. 00:29:27.182 [2024-07-15 21:05:31.002374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.182 [2024-07-15 21:05:31.002446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.182 [2024-07-15 21:05:31.002458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.182 [2024-07-15 21:05:31.002463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.182 [2024-07-15 21:05:31.002467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.182 [2024-07-15 21:05:31.002478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.182 qpair failed and we were unable to recover it. 
00:29:27.182 [2024-07-15 21:05:31.012413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.183 [2024-07-15 21:05:31.012494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.183 [2024-07-15 21:05:31.012507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.183 [2024-07-15 21:05:31.012512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.183 [2024-07-15 21:05:31.012516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.183 [2024-07-15 21:05:31.012527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.183 qpair failed and we were unable to recover it. 00:29:27.183 [2024-07-15 21:05:31.022443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.183 [2024-07-15 21:05:31.022516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.183 [2024-07-15 21:05:31.022528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.183 [2024-07-15 21:05:31.022532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.183 [2024-07-15 21:05:31.022537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.183 [2024-07-15 21:05:31.022548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.183 qpair failed and we were unable to recover it. 00:29:27.183 [2024-07-15 21:05:31.032454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.183 [2024-07-15 21:05:31.032525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.183 [2024-07-15 21:05:31.032537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.183 [2024-07-15 21:05:31.032542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.183 [2024-07-15 21:05:31.032546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.183 [2024-07-15 21:05:31.032557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.183 qpair failed and we were unable to recover it. 
00:29:27.183 [2024-07-15 21:05:31.042504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.183 [2024-07-15 21:05:31.042572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.183 [2024-07-15 21:05:31.042584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.183 [2024-07-15 21:05:31.042589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.183 [2024-07-15 21:05:31.042593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.183 [2024-07-15 21:05:31.042604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.183 qpair failed and we were unable to recover it. 00:29:27.183 [2024-07-15 21:05:31.052407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.183 [2024-07-15 21:05:31.052483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.183 [2024-07-15 21:05:31.052495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.183 [2024-07-15 21:05:31.052500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.183 [2024-07-15 21:05:31.052504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.183 [2024-07-15 21:05:31.052515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.183 qpair failed and we were unable to recover it. 00:29:27.183 [2024-07-15 21:05:31.062534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.183 [2024-07-15 21:05:31.062599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.183 [2024-07-15 21:05:31.062611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.183 [2024-07-15 21:05:31.062616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.183 [2024-07-15 21:05:31.062620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.183 [2024-07-15 21:05:31.062631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.183 qpair failed and we were unable to recover it. 
00:29:27.445 [2024-07-15 21:05:31.072558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.445 [2024-07-15 21:05:31.072621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.445 [2024-07-15 21:05:31.072634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.445 [2024-07-15 21:05:31.072639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.445 [2024-07-15 21:05:31.072649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.445 [2024-07-15 21:05:31.072660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.445 qpair failed and we were unable to recover it. 00:29:27.445 [2024-07-15 21:05:31.082626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.445 [2024-07-15 21:05:31.082705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.445 [2024-07-15 21:05:31.082717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.445 [2024-07-15 21:05:31.082722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.445 [2024-07-15 21:05:31.082726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.445 [2024-07-15 21:05:31.082737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.445 qpair failed and we were unable to recover it. 00:29:27.445 [2024-07-15 21:05:31.092623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.445 [2024-07-15 21:05:31.092700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.445 [2024-07-15 21:05:31.092713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.445 [2024-07-15 21:05:31.092718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.445 [2024-07-15 21:05:31.092723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.445 [2024-07-15 21:05:31.092733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.445 qpair failed and we were unable to recover it. 
00:29:27.445 [2024-07-15 21:05:31.102668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.445 [2024-07-15 21:05:31.102772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.445 [2024-07-15 21:05:31.102784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.445 [2024-07-15 21:05:31.102789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.445 [2024-07-15 21:05:31.102794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.445 [2024-07-15 21:05:31.102805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.445 qpair failed and we were unable to recover it. 00:29:27.445 [2024-07-15 21:05:31.112680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.445 [2024-07-15 21:05:31.112785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.445 [2024-07-15 21:05:31.112798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.445 [2024-07-15 21:05:31.112803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.445 [2024-07-15 21:05:31.112808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.445 [2024-07-15 21:05:31.112819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.445 qpair failed and we were unable to recover it. 00:29:27.445 [2024-07-15 21:05:31.122702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.445 [2024-07-15 21:05:31.122771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.445 [2024-07-15 21:05:31.122784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.445 [2024-07-15 21:05:31.122789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.445 [2024-07-15 21:05:31.122793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.445 [2024-07-15 21:05:31.122804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.445 qpair failed and we were unable to recover it. 
00:29:27.445 [2024-07-15 21:05:31.132733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.445 [2024-07-15 21:05:31.132806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.445 [2024-07-15 21:05:31.132818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.445 [2024-07-15 21:05:31.132823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.445 [2024-07-15 21:05:31.132828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.445 [2024-07-15 21:05:31.132839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.445 qpair failed and we were unable to recover it. 00:29:27.445 [2024-07-15 21:05:31.142779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.445 [2024-07-15 21:05:31.142849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.445 [2024-07-15 21:05:31.142862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.445 [2024-07-15 21:05:31.142867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.445 [2024-07-15 21:05:31.142871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.445 [2024-07-15 21:05:31.142882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.445 qpair failed and we were unable to recover it. 00:29:27.445 [2024-07-15 21:05:31.152763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.445 [2024-07-15 21:05:31.152834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.445 [2024-07-15 21:05:31.152847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.445 [2024-07-15 21:05:31.152853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.445 [2024-07-15 21:05:31.152859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.446 [2024-07-15 21:05:31.152872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.446 qpair failed and we were unable to recover it. 
00:29:27.446 [2024-07-15 21:05:31.162850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.446 [2024-07-15 21:05:31.162927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.446 [2024-07-15 21:05:31.162939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.446 [2024-07-15 21:05:31.162947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.446 [2024-07-15 21:05:31.162952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.446 [2024-07-15 21:05:31.162963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.446 qpair failed and we were unable to recover it. 00:29:27.446 [2024-07-15 21:05:31.172833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.446 [2024-07-15 21:05:31.172928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.446 [2024-07-15 21:05:31.172940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.446 [2024-07-15 21:05:31.172946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.446 [2024-07-15 21:05:31.172950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.446 [2024-07-15 21:05:31.172961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.446 qpair failed and we were unable to recover it. 00:29:27.446 [2024-07-15 21:05:31.182842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.446 [2024-07-15 21:05:31.182911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.446 [2024-07-15 21:05:31.182923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.446 [2024-07-15 21:05:31.182928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.446 [2024-07-15 21:05:31.182933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.446 [2024-07-15 21:05:31.182943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.446 qpair failed and we were unable to recover it. 
00:29:27.446 [2024-07-15 21:05:31.192781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.446 [2024-07-15 21:05:31.192854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.446 [2024-07-15 21:05:31.192867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.446 [2024-07-15 21:05:31.192872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.446 [2024-07-15 21:05:31.192876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.446 [2024-07-15 21:05:31.192887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.446 qpair failed and we were unable to recover it. 00:29:27.446 [2024-07-15 21:05:31.202924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.446 [2024-07-15 21:05:31.202998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.446 [2024-07-15 21:05:31.203010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.446 [2024-07-15 21:05:31.203015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.446 [2024-07-15 21:05:31.203020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.446 [2024-07-15 21:05:31.203030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.446 qpair failed and we were unable to recover it. 00:29:27.446 [2024-07-15 21:05:31.212942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.446 [2024-07-15 21:05:31.213016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.446 [2024-07-15 21:05:31.213028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.446 [2024-07-15 21:05:31.213033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.446 [2024-07-15 21:05:31.213038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.446 [2024-07-15 21:05:31.213049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.446 qpair failed and we were unable to recover it. 
00:29:27.446 [2024-07-15 21:05:31.222998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.446 [2024-07-15 21:05:31.223095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.446 [2024-07-15 21:05:31.223107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.446 [2024-07-15 21:05:31.223112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.446 [2024-07-15 21:05:31.223117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.446 [2024-07-15 21:05:31.223133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.446 qpair failed and we were unable to recover it. 00:29:27.446 [2024-07-15 21:05:31.233023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.446 [2024-07-15 21:05:31.233091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.446 [2024-07-15 21:05:31.233103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.446 [2024-07-15 21:05:31.233108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.446 [2024-07-15 21:05:31.233113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.446 [2024-07-15 21:05:31.233128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.446 qpair failed and we were unable to recover it. 00:29:27.446 [2024-07-15 21:05:31.243045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.446 [2024-07-15 21:05:31.243115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.446 [2024-07-15 21:05:31.243133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.446 [2024-07-15 21:05:31.243138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.446 [2024-07-15 21:05:31.243142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.446 [2024-07-15 21:05:31.243154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.446 qpair failed and we were unable to recover it. 
00:29:27.446 [2024-07-15 21:05:31.253090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.446 [2024-07-15 21:05:31.253167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.446 [2024-07-15 21:05:31.253182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.446 [2024-07-15 21:05:31.253187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.446 [2024-07-15 21:05:31.253192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.446 [2024-07-15 21:05:31.253203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.446 qpair failed and we were unable to recover it. 00:29:27.446 [2024-07-15 21:05:31.263086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.446 [2024-07-15 21:05:31.263202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.446 [2024-07-15 21:05:31.263215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.446 [2024-07-15 21:05:31.263221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.446 [2024-07-15 21:05:31.263225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.446 [2024-07-15 21:05:31.263236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.446 qpair failed and we were unable to recover it. 00:29:27.446 [2024-07-15 21:05:31.273124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.446 [2024-07-15 21:05:31.273232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.446 [2024-07-15 21:05:31.273245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.446 [2024-07-15 21:05:31.273250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.446 [2024-07-15 21:05:31.273254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.446 [2024-07-15 21:05:31.273265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.446 qpair failed and we were unable to recover it. 
00:29:27.446 [2024-07-15 21:05:31.283118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.446 [2024-07-15 21:05:31.283196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.446 [2024-07-15 21:05:31.283209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.446 [2024-07-15 21:05:31.283214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.446 [2024-07-15 21:05:31.283218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.446 [2024-07-15 21:05:31.283230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.446 qpair failed and we were unable to recover it. 00:29:27.446 [2024-07-15 21:05:31.293188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.446 [2024-07-15 21:05:31.293258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.446 [2024-07-15 21:05:31.293271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.446 [2024-07-15 21:05:31.293276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.446 [2024-07-15 21:05:31.293280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.446 [2024-07-15 21:05:31.293294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.446 qpair failed and we were unable to recover it. 00:29:27.446 [2024-07-15 21:05:31.303100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.447 [2024-07-15 21:05:31.303170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.447 [2024-07-15 21:05:31.303183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.447 [2024-07-15 21:05:31.303188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.447 [2024-07-15 21:05:31.303193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.447 [2024-07-15 21:05:31.303205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.447 qpair failed and we were unable to recover it. 
00:29:27.447 [2024-07-15 21:05:31.313255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.447 [2024-07-15 21:05:31.313326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.447 [2024-07-15 21:05:31.313339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.447 [2024-07-15 21:05:31.313344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.447 [2024-07-15 21:05:31.313348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.447 [2024-07-15 21:05:31.313360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.447 qpair failed and we were unable to recover it. 00:29:27.447 [2024-07-15 21:05:31.323307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.447 [2024-07-15 21:05:31.323398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.447 [2024-07-15 21:05:31.323410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.447 [2024-07-15 21:05:31.323415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.447 [2024-07-15 21:05:31.323420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.447 [2024-07-15 21:05:31.323431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.447 qpair failed and we were unable to recover it. 00:29:27.447 [2024-07-15 21:05:31.333297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.447 [2024-07-15 21:05:31.333370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.447 [2024-07-15 21:05:31.333382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.447 [2024-07-15 21:05:31.333387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.447 [2024-07-15 21:05:31.333391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.447 [2024-07-15 21:05:31.333402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.447 qpair failed and we were unable to recover it. 
00:29:27.711 [2024-07-15 21:05:31.343220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.343290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.343306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.343311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.343316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.711 [2024-07-15 21:05:31.343327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.711 qpair failed and we were unable to recover it. 00:29:27.711 [2024-07-15 21:05:31.353252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.353317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.353330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.353335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.353340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.711 [2024-07-15 21:05:31.353350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.711 qpair failed and we were unable to recover it. 00:29:27.711 [2024-07-15 21:05:31.363473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.363548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.363560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.363565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.363569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.711 [2024-07-15 21:05:31.363580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.711 qpair failed and we were unable to recover it. 
00:29:27.711 [2024-07-15 21:05:31.373445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.373527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.373540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.373545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.373549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.711 [2024-07-15 21:05:31.373560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.711 qpair failed and we were unable to recover it. 00:29:27.711 [2024-07-15 21:05:31.383441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.383505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.383518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.383523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.383527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.711 [2024-07-15 21:05:31.383540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.711 qpair failed and we were unable to recover it. 00:29:27.711 [2024-07-15 21:05:31.393481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.393571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.393584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.393589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.393593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.711 [2024-07-15 21:05:31.393604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.711 qpair failed and we were unable to recover it. 
00:29:27.711 [2024-07-15 21:05:31.403539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.403605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.403617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.403622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.403626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.711 [2024-07-15 21:05:31.403637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.711 qpair failed and we were unable to recover it. 00:29:27.711 [2024-07-15 21:05:31.413454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.413548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.413562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.413567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.413571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.711 [2024-07-15 21:05:31.413583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.711 qpair failed and we were unable to recover it. 00:29:27.711 [2024-07-15 21:05:31.423552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.423618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.423631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.423636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.423641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.711 [2024-07-15 21:05:31.423652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.711 qpair failed and we were unable to recover it. 
00:29:27.711 [2024-07-15 21:05:31.433578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.433656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.433668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.433673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.433678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.711 [2024-07-15 21:05:31.433688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.711 qpair failed and we were unable to recover it. 00:29:27.711 [2024-07-15 21:05:31.443606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.443675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.443687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.443692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.443696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.711 [2024-07-15 21:05:31.443707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.711 qpair failed and we were unable to recover it. 00:29:27.711 [2024-07-15 21:05:31.453617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.453689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.453702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.453707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.453711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.711 [2024-07-15 21:05:31.453722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.711 qpair failed and we were unable to recover it. 
00:29:27.711 [2024-07-15 21:05:31.463667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.463732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.463744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.463749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.463754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.711 [2024-07-15 21:05:31.463765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.711 qpair failed and we were unable to recover it. 00:29:27.711 [2024-07-15 21:05:31.473678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.473767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.473787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.473793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.473802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.711 [2024-07-15 21:05:31.473816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.711 qpair failed and we were unable to recover it. 00:29:27.711 [2024-07-15 21:05:31.483717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.483793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.483811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.483818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.483822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.711 [2024-07-15 21:05:31.483837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.711 qpair failed and we were unable to recover it. 
00:29:27.711 [2024-07-15 21:05:31.493638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.711 [2024-07-15 21:05:31.493710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.711 [2024-07-15 21:05:31.493724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.711 [2024-07-15 21:05:31.493729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.711 [2024-07-15 21:05:31.493733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.712 [2024-07-15 21:05:31.493745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.712 qpair failed and we were unable to recover it. 00:29:27.712 [2024-07-15 21:05:31.503930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.712 [2024-07-15 21:05:31.504001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.712 [2024-07-15 21:05:31.504013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.712 [2024-07-15 21:05:31.504018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.712 [2024-07-15 21:05:31.504022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.712 [2024-07-15 21:05:31.504033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.712 qpair failed and we were unable to recover it. 00:29:27.712 [2024-07-15 21:05:31.513767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.712 [2024-07-15 21:05:31.513838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.712 [2024-07-15 21:05:31.513857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.712 [2024-07-15 21:05:31.513863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.712 [2024-07-15 21:05:31.513868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.712 [2024-07-15 21:05:31.513882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.712 qpair failed and we were unable to recover it. 
00:29:27.712 [2024-07-15 21:05:31.523895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.712 [2024-07-15 21:05:31.524001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.712 [2024-07-15 21:05:31.524020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.712 [2024-07-15 21:05:31.524026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.712 [2024-07-15 21:05:31.524031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.712 [2024-07-15 21:05:31.524046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.712 qpair failed and we were unable to recover it. 00:29:27.712 [2024-07-15 21:05:31.533837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.712 [2024-07-15 21:05:31.533910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.712 [2024-07-15 21:05:31.533924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.712 [2024-07-15 21:05:31.533929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.712 [2024-07-15 21:05:31.533933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.712 [2024-07-15 21:05:31.533944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.712 qpair failed and we were unable to recover it. 00:29:27.712 [2024-07-15 21:05:31.543914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.712 [2024-07-15 21:05:31.543982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.712 [2024-07-15 21:05:31.543995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.712 [2024-07-15 21:05:31.544000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.712 [2024-07-15 21:05:31.544004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.712 [2024-07-15 21:05:31.544016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.712 qpair failed and we were unable to recover it. 
00:29:27.712 [2024-07-15 21:05:31.553912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.712 [2024-07-15 21:05:31.553992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.712 [2024-07-15 21:05:31.554005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.712 [2024-07-15 21:05:31.554010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.712 [2024-07-15 21:05:31.554014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.712 [2024-07-15 21:05:31.554025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.712 qpair failed and we were unable to recover it. 00:29:27.712 [2024-07-15 21:05:31.563969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.712 [2024-07-15 21:05:31.564039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.712 [2024-07-15 21:05:31.564051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.712 [2024-07-15 21:05:31.564060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.712 [2024-07-15 21:05:31.564065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.712 [2024-07-15 21:05:31.564076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.712 qpair failed and we were unable to recover it. 00:29:27.712 [2024-07-15 21:05:31.573979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.712 [2024-07-15 21:05:31.574058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.712 [2024-07-15 21:05:31.574071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.712 [2024-07-15 21:05:31.574076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.712 [2024-07-15 21:05:31.574080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.712 [2024-07-15 21:05:31.574091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.712 qpair failed and we were unable to recover it. 
00:29:27.712 [2024-07-15 21:05:31.584009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.712 [2024-07-15 21:05:31.584073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.712 [2024-07-15 21:05:31.584086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.712 [2024-07-15 21:05:31.584091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.712 [2024-07-15 21:05:31.584095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.712 [2024-07-15 21:05:31.584106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.712 qpair failed and we were unable to recover it. 00:29:27.712 [2024-07-15 21:05:31.594016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.712 [2024-07-15 21:05:31.594082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.712 [2024-07-15 21:05:31.594094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.712 [2024-07-15 21:05:31.594099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.712 [2024-07-15 21:05:31.594104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.712 [2024-07-15 21:05:31.594114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.712 qpair failed and we were unable to recover it. 00:29:27.974 [2024-07-15 21:05:31.604079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.974 [2024-07-15 21:05:31.604154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.974 [2024-07-15 21:05:31.604167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.974 [2024-07-15 21:05:31.604172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.974 [2024-07-15 21:05:31.604176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.974 [2024-07-15 21:05:31.604188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.974 qpair failed and we were unable to recover it. 
00:29:27.974 [2024-07-15 21:05:31.613956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.974 [2024-07-15 21:05:31.614046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.974 [2024-07-15 21:05:31.614059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.974 [2024-07-15 21:05:31.614064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.974 [2024-07-15 21:05:31.614068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.974 [2024-07-15 21:05:31.614079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.974 qpair failed and we were unable to recover it. 00:29:27.974 [2024-07-15 21:05:31.624081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.974 [2024-07-15 21:05:31.624160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.974 [2024-07-15 21:05:31.624173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.974 [2024-07-15 21:05:31.624178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.974 [2024-07-15 21:05:31.624182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.974 [2024-07-15 21:05:31.624193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.974 qpair failed and we were unable to recover it. 00:29:27.974 [2024-07-15 21:05:31.634170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.974 [2024-07-15 21:05:31.634245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.974 [2024-07-15 21:05:31.634258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.974 [2024-07-15 21:05:31.634263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.974 [2024-07-15 21:05:31.634267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.974 [2024-07-15 21:05:31.634280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.974 qpair failed and we were unable to recover it. 
00:29:27.974 [2024-07-15 21:05:31.644164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.974 [2024-07-15 21:05:31.644236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.974 [2024-07-15 21:05:31.644249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.974 [2024-07-15 21:05:31.644254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.974 [2024-07-15 21:05:31.644258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.974 [2024-07-15 21:05:31.644269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.974 qpair failed and we were unable to recover it. 00:29:27.974 [2024-07-15 21:05:31.654078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.974 [2024-07-15 21:05:31.654180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.974 [2024-07-15 21:05:31.654193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.974 [2024-07-15 21:05:31.654201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.974 [2024-07-15 21:05:31.654206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.974 [2024-07-15 21:05:31.654217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.974 qpair failed and we were unable to recover it. 00:29:27.975 [2024-07-15 21:05:31.664300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.975 [2024-07-15 21:05:31.664371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.975 [2024-07-15 21:05:31.664384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.975 [2024-07-15 21:05:31.664389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.975 [2024-07-15 21:05:31.664393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.975 [2024-07-15 21:05:31.664405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.975 qpair failed and we were unable to recover it. 
00:29:27.975 [2024-07-15 21:05:31.674237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.975 [2024-07-15 21:05:31.674304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.975 [2024-07-15 21:05:31.674316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.975 [2024-07-15 21:05:31.674321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.975 [2024-07-15 21:05:31.674325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.975 [2024-07-15 21:05:31.674336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.975 qpair failed and we were unable to recover it. 00:29:27.975 [2024-07-15 21:05:31.684230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.975 [2024-07-15 21:05:31.684305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.975 [2024-07-15 21:05:31.684318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.975 [2024-07-15 21:05:31.684323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.975 [2024-07-15 21:05:31.684327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.975 [2024-07-15 21:05:31.684338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.975 qpair failed and we were unable to recover it. 00:29:27.975 [2024-07-15 21:05:31.694312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.975 [2024-07-15 21:05:31.694386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.975 [2024-07-15 21:05:31.694399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.975 [2024-07-15 21:05:31.694404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.975 [2024-07-15 21:05:31.694408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.975 [2024-07-15 21:05:31.694419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.975 qpair failed and we were unable to recover it. 
00:29:27.975 [2024-07-15 21:05:31.704323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.975 [2024-07-15 21:05:31.704399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.975 [2024-07-15 21:05:31.704411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.975 [2024-07-15 21:05:31.704417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.975 [2024-07-15 21:05:31.704421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.975 [2024-07-15 21:05:31.704433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.975 qpair failed and we were unable to recover it. 00:29:27.975 [2024-07-15 21:05:31.714354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.975 [2024-07-15 21:05:31.714425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.975 [2024-07-15 21:05:31.714438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.975 [2024-07-15 21:05:31.714443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.975 [2024-07-15 21:05:31.714447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.975 [2024-07-15 21:05:31.714458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.975 qpair failed and we were unable to recover it. 00:29:27.975 [2024-07-15 21:05:31.724383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.975 [2024-07-15 21:05:31.724453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.975 [2024-07-15 21:05:31.724465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.975 [2024-07-15 21:05:31.724471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.975 [2024-07-15 21:05:31.724475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.975 [2024-07-15 21:05:31.724486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.975 qpair failed and we were unable to recover it. 
00:29:27.975 [2024-07-15 21:05:31.734298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.975 [2024-07-15 21:05:31.734376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.975 [2024-07-15 21:05:31.734389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.975 [2024-07-15 21:05:31.734394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.975 [2024-07-15 21:05:31.734399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.975 [2024-07-15 21:05:31.734410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.975 qpair failed and we were unable to recover it. 00:29:27.975 [2024-07-15 21:05:31.744449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.975 [2024-07-15 21:05:31.744519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.975 [2024-07-15 21:05:31.744535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.975 [2024-07-15 21:05:31.744540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.975 [2024-07-15 21:05:31.744544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.975 [2024-07-15 21:05:31.744555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.975 qpair failed and we were unable to recover it. 00:29:27.975 [2024-07-15 21:05:31.754462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.975 [2024-07-15 21:05:31.754561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.975 [2024-07-15 21:05:31.754574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.975 [2024-07-15 21:05:31.754579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.975 [2024-07-15 21:05:31.754583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.975 [2024-07-15 21:05:31.754594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.975 qpair failed and we were unable to recover it. 
00:29:27.975 [2024-07-15 21:05:31.764475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.975 [2024-07-15 21:05:31.764582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.975 [2024-07-15 21:05:31.764594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.975 [2024-07-15 21:05:31.764599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.975 [2024-07-15 21:05:31.764603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.975 [2024-07-15 21:05:31.764615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.975 qpair failed and we were unable to recover it. 00:29:27.975 [2024-07-15 21:05:31.774616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.975 [2024-07-15 21:05:31.774690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.975 [2024-07-15 21:05:31.774703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.975 [2024-07-15 21:05:31.774708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.975 [2024-07-15 21:05:31.774712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.975 [2024-07-15 21:05:31.774723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.975 qpair failed and we were unable to recover it. 00:29:27.975 [2024-07-15 21:05:31.784580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.975 [2024-07-15 21:05:31.784647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.975 [2024-07-15 21:05:31.784659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.975 [2024-07-15 21:05:31.784665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.975 [2024-07-15 21:05:31.784669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.975 [2024-07-15 21:05:31.784683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.975 qpair failed and we were unable to recover it. 
00:29:27.975 [2024-07-15 21:05:31.794565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.975 [2024-07-15 21:05:31.794641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.975 [2024-07-15 21:05:31.794654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.975 [2024-07-15 21:05:31.794659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.975 [2024-07-15 21:05:31.794663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.975 [2024-07-15 21:05:31.794674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.975 qpair failed and we were unable to recover it. 00:29:27.976 [2024-07-15 21:05:31.804609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.976 [2024-07-15 21:05:31.804682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.976 [2024-07-15 21:05:31.804694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.976 [2024-07-15 21:05:31.804699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.976 [2024-07-15 21:05:31.804703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.976 [2024-07-15 21:05:31.804715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.976 qpair failed and we were unable to recover it. 00:29:27.976 [2024-07-15 21:05:31.814625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.976 [2024-07-15 21:05:31.814697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.976 [2024-07-15 21:05:31.814710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.976 [2024-07-15 21:05:31.814715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.976 [2024-07-15 21:05:31.814719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.976 [2024-07-15 21:05:31.814730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.976 qpair failed and we were unable to recover it. 
00:29:27.976 [2024-07-15 21:05:31.824728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.976 [2024-07-15 21:05:31.824845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.976 [2024-07-15 21:05:31.824858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.976 [2024-07-15 21:05:31.824864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.976 [2024-07-15 21:05:31.824868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.976 [2024-07-15 21:05:31.824879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.976 qpair failed and we were unable to recover it. 00:29:27.976 [2024-07-15 21:05:31.834698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.976 [2024-07-15 21:05:31.834766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.976 [2024-07-15 21:05:31.834791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.976 [2024-07-15 21:05:31.834798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.976 [2024-07-15 21:05:31.834802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.976 [2024-07-15 21:05:31.834816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.976 qpair failed and we were unable to recover it. 00:29:27.976 [2024-07-15 21:05:31.844719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.976 [2024-07-15 21:05:31.844795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.976 [2024-07-15 21:05:31.844814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.976 [2024-07-15 21:05:31.844820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.976 [2024-07-15 21:05:31.844824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.976 [2024-07-15 21:05:31.844839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.976 qpair failed and we were unable to recover it. 
00:29:27.976 [2024-07-15 21:05:31.854753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.976 [2024-07-15 21:05:31.854831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.976 [2024-07-15 21:05:31.854850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.976 [2024-07-15 21:05:31.854856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.976 [2024-07-15 21:05:31.854861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.976 [2024-07-15 21:05:31.854875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.976 qpair failed and we were unable to recover it. 00:29:27.976 [2024-07-15 21:05:31.864776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.976 [2024-07-15 21:05:31.864847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.976 [2024-07-15 21:05:31.864866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.976 [2024-07-15 21:05:31.864872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.976 [2024-07-15 21:05:31.864876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:27.976 [2024-07-15 21:05:31.864891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.976 qpair failed and we were unable to recover it. 00:29:28.238 [2024-07-15 21:05:31.874796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.238 [2024-07-15 21:05:31.874865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.238 [2024-07-15 21:05:31.874879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.238 [2024-07-15 21:05:31.874885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.238 [2024-07-15 21:05:31.874893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.238 [2024-07-15 21:05:31.874905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.238 qpair failed and we were unable to recover it. 
00:29:28.238 [2024-07-15 21:05:31.884870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.238 [2024-07-15 21:05:31.884962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.238 [2024-07-15 21:05:31.884981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.238 [2024-07-15 21:05:31.884987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.238 [2024-07-15 21:05:31.884991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.238 [2024-07-15 21:05:31.885006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.238 qpair failed and we were unable to recover it. 00:29:28.238 [2024-07-15 21:05:31.894841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.238 [2024-07-15 21:05:31.894908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.238 [2024-07-15 21:05:31.894922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.238 [2024-07-15 21:05:31.894927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.238 [2024-07-15 21:05:31.894931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.238 [2024-07-15 21:05:31.894943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.238 qpair failed and we were unable to recover it. 00:29:28.238 [2024-07-15 21:05:31.904881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.238 [2024-07-15 21:05:31.904947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.238 [2024-07-15 21:05:31.904960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.238 [2024-07-15 21:05:31.904965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.238 [2024-07-15 21:05:31.904969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.238 [2024-07-15 21:05:31.904980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.238 qpair failed and we were unable to recover it. 
00:29:28.238 [2024-07-15 21:05:31.914906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.238 [2024-07-15 21:05:31.914972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.238 [2024-07-15 21:05:31.914985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.238 [2024-07-15 21:05:31.914990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.238 [2024-07-15 21:05:31.914994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.238 [2024-07-15 21:05:31.915005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.238 qpair failed and we were unable to recover it. 00:29:28.238 [2024-07-15 21:05:31.925037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.238 [2024-07-15 21:05:31.925118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.238 [2024-07-15 21:05:31.925133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.238 [2024-07-15 21:05:31.925139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.238 [2024-07-15 21:05:31.925143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.238 [2024-07-15 21:05:31.925154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.238 qpair failed and we were unable to recover it. 00:29:28.238 [2024-07-15 21:05:31.934952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.238 [2024-07-15 21:05:31.935024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.238 [2024-07-15 21:05:31.935037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.239 [2024-07-15 21:05:31.935042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.239 [2024-07-15 21:05:31.935046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.239 [2024-07-15 21:05:31.935057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.239 qpair failed and we were unable to recover it. 
00:29:28.239 [2024-07-15 21:05:31.944989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.239 [2024-07-15 21:05:31.945057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.239 [2024-07-15 21:05:31.945069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.239 [2024-07-15 21:05:31.945074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.239 [2024-07-15 21:05:31.945079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.239 [2024-07-15 21:05:31.945090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.239 qpair failed and we were unable to recover it. 00:29:28.239 [2024-07-15 21:05:31.955074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.239 [2024-07-15 21:05:31.955143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.239 [2024-07-15 21:05:31.955156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.239 [2024-07-15 21:05:31.955161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.239 [2024-07-15 21:05:31.955166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.239 [2024-07-15 21:05:31.955177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.239 qpair failed and we were unable to recover it. 00:29:28.239 [2024-07-15 21:05:31.964942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.239 [2024-07-15 21:05:31.965011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.239 [2024-07-15 21:05:31.965023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.239 [2024-07-15 21:05:31.965032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.239 [2024-07-15 21:05:31.965036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.239 [2024-07-15 21:05:31.965047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.239 qpair failed and we were unable to recover it. 
00:29:28.239 [2024-07-15 21:05:31.975063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.239 [2024-07-15 21:05:31.975237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.239 [2024-07-15 21:05:31.975251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.239 [2024-07-15 21:05:31.975256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.239 [2024-07-15 21:05:31.975260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.239 [2024-07-15 21:05:31.975272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.239 qpair failed and we were unable to recover it. 00:29:28.239 [2024-07-15 21:05:31.985093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.239 [2024-07-15 21:05:31.985164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.239 [2024-07-15 21:05:31.985177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.239 [2024-07-15 21:05:31.985182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.239 [2024-07-15 21:05:31.985186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.239 [2024-07-15 21:05:31.985197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.239 qpair failed and we were unable to recover it. 00:29:28.239 [2024-07-15 21:05:31.995135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.239 [2024-07-15 21:05:31.995236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.239 [2024-07-15 21:05:31.995248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.239 [2024-07-15 21:05:31.995253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.239 [2024-07-15 21:05:31.995257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.239 [2024-07-15 21:05:31.995269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.239 qpair failed and we were unable to recover it. 
00:29:28.239 [2024-07-15 21:05:32.005059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.239 [2024-07-15 21:05:32.005130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.239 [2024-07-15 21:05:32.005143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.239 [2024-07-15 21:05:32.005148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.239 [2024-07-15 21:05:32.005153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.239 [2024-07-15 21:05:32.005164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.239 qpair failed and we were unable to recover it. 00:29:28.239 [2024-07-15 21:05:32.015164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.239 [2024-07-15 21:05:32.015234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.239 [2024-07-15 21:05:32.015246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.239 [2024-07-15 21:05:32.015252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.239 [2024-07-15 21:05:32.015256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.239 [2024-07-15 21:05:32.015267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.239 qpair failed and we were unable to recover it. 00:29:28.239 [2024-07-15 21:05:32.025203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.239 [2024-07-15 21:05:32.025274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.239 [2024-07-15 21:05:32.025286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.239 [2024-07-15 21:05:32.025291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.239 [2024-07-15 21:05:32.025296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.239 [2024-07-15 21:05:32.025306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.239 qpair failed and we were unable to recover it. 
00:29:28.239 [2024-07-15 21:05:32.035183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.239 [2024-07-15 21:05:32.035271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.239 [2024-07-15 21:05:32.035285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.239 [2024-07-15 21:05:32.035290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.239 [2024-07-15 21:05:32.035294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.239 [2024-07-15 21:05:32.035306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.239 qpair failed and we were unable to recover it. 00:29:28.239 [2024-07-15 21:05:32.045280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.239 [2024-07-15 21:05:32.045350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.239 [2024-07-15 21:05:32.045363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.239 [2024-07-15 21:05:32.045368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.239 [2024-07-15 21:05:32.045372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.239 [2024-07-15 21:05:32.045384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.239 qpair failed and we were unable to recover it. 00:29:28.239 [2024-07-15 21:05:32.055208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.239 [2024-07-15 21:05:32.055276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.239 [2024-07-15 21:05:32.055288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.239 [2024-07-15 21:05:32.055296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.240 [2024-07-15 21:05:32.055301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.240 [2024-07-15 21:05:32.055312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.240 qpair failed and we were unable to recover it. 
00:29:28.240 [2024-07-15 21:05:32.065346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.240 [2024-07-15 21:05:32.065419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.240 [2024-07-15 21:05:32.065431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.240 [2024-07-15 21:05:32.065436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.240 [2024-07-15 21:05:32.065440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.240 [2024-07-15 21:05:32.065451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.240 qpair failed and we were unable to recover it. 00:29:28.240 [2024-07-15 21:05:32.075349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.240 [2024-07-15 21:05:32.075418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.240 [2024-07-15 21:05:32.075430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.240 [2024-07-15 21:05:32.075435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.240 [2024-07-15 21:05:32.075439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.240 [2024-07-15 21:05:32.075450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.240 qpair failed and we were unable to recover it. 00:29:28.240 [2024-07-15 21:05:32.085334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.240 [2024-07-15 21:05:32.085403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.240 [2024-07-15 21:05:32.085415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.240 [2024-07-15 21:05:32.085420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.240 [2024-07-15 21:05:32.085424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.240 [2024-07-15 21:05:32.085435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.240 qpair failed and we were unable to recover it. 
00:29:28.240 [2024-07-15 21:05:32.095289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.240 [2024-07-15 21:05:32.095396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.240 [2024-07-15 21:05:32.095409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.240 [2024-07-15 21:05:32.095414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.240 [2024-07-15 21:05:32.095418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.240 [2024-07-15 21:05:32.095429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.240 qpair failed and we were unable to recover it. 00:29:28.240 [2024-07-15 21:05:32.105448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.240 [2024-07-15 21:05:32.105515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.240 [2024-07-15 21:05:32.105527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.240 [2024-07-15 21:05:32.105532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.240 [2024-07-15 21:05:32.105536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.240 [2024-07-15 21:05:32.105547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.240 qpair failed and we were unable to recover it. 00:29:28.240 [2024-07-15 21:05:32.115448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.240 [2024-07-15 21:05:32.115518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.240 [2024-07-15 21:05:32.115530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.240 [2024-07-15 21:05:32.115535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.240 [2024-07-15 21:05:32.115539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.240 [2024-07-15 21:05:32.115550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.240 qpair failed and we were unable to recover it. 
00:29:28.240 [2024-07-15 21:05:32.125476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.240 [2024-07-15 21:05:32.125545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.240 [2024-07-15 21:05:32.125557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.240 [2024-07-15 21:05:32.125562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.240 [2024-07-15 21:05:32.125566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.240 [2024-07-15 21:05:32.125577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.240 qpair failed and we were unable to recover it. 00:29:28.502 [2024-07-15 21:05:32.135426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.502 [2024-07-15 21:05:32.135492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.502 [2024-07-15 21:05:32.135504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.502 [2024-07-15 21:05:32.135509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.502 [2024-07-15 21:05:32.135514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.502 [2024-07-15 21:05:32.135524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.502 qpair failed and we were unable to recover it. 00:29:28.502 [2024-07-15 21:05:32.145521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.502 [2024-07-15 21:05:32.145588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.502 [2024-07-15 21:05:32.145603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.502 [2024-07-15 21:05:32.145608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.502 [2024-07-15 21:05:32.145612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.502 [2024-07-15 21:05:32.145623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.502 qpair failed and we were unable to recover it. 
00:29:28.503 [2024-07-15 21:05:32.155491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.503 [2024-07-15 21:05:32.155573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.503 [2024-07-15 21:05:32.155585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.503 [2024-07-15 21:05:32.155590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.503 [2024-07-15 21:05:32.155594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.503 [2024-07-15 21:05:32.155605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.503 qpair failed and we were unable to recover it. 00:29:28.503 [2024-07-15 21:05:32.165484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.503 [2024-07-15 21:05:32.165555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.503 [2024-07-15 21:05:32.165568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.503 [2024-07-15 21:05:32.165573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.503 [2024-07-15 21:05:32.165577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.503 [2024-07-15 21:05:32.165588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.503 qpair failed and we were unable to recover it. 00:29:28.503 [2024-07-15 21:05:32.175612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.503 [2024-07-15 21:05:32.175702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.503 [2024-07-15 21:05:32.175714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.503 [2024-07-15 21:05:32.175719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.503 [2024-07-15 21:05:32.175723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.503 [2024-07-15 21:05:32.175734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.503 qpair failed and we were unable to recover it. 
00:29:28.503 [2024-07-15 21:05:32.185631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.503 [2024-07-15 21:05:32.185697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.503 [2024-07-15 21:05:32.185709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.503 [2024-07-15 21:05:32.185714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.503 [2024-07-15 21:05:32.185718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.503 [2024-07-15 21:05:32.185732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.503 qpair failed and we were unable to recover it. 00:29:28.503 [2024-07-15 21:05:32.195678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.503 [2024-07-15 21:05:32.195744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.503 [2024-07-15 21:05:32.195757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.503 [2024-07-15 21:05:32.195762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.503 [2024-07-15 21:05:32.195766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.503 [2024-07-15 21:05:32.195777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.503 qpair failed and we were unable to recover it. 00:29:28.503 [2024-07-15 21:05:32.205808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.503 [2024-07-15 21:05:32.205879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.503 [2024-07-15 21:05:32.205891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.503 [2024-07-15 21:05:32.205896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.503 [2024-07-15 21:05:32.205900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.503 [2024-07-15 21:05:32.205911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.503 qpair failed and we were unable to recover it. 
00:29:28.503 [2024-07-15 21:05:32.215678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.503 [2024-07-15 21:05:32.215749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.503 [2024-07-15 21:05:32.215762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.503 [2024-07-15 21:05:32.215767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.503 [2024-07-15 21:05:32.215772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.503 [2024-07-15 21:05:32.215783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.503 qpair failed and we were unable to recover it. 00:29:28.503 [2024-07-15 21:05:32.225753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.503 [2024-07-15 21:05:32.225819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.503 [2024-07-15 21:05:32.225832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.503 [2024-07-15 21:05:32.225837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.503 [2024-07-15 21:05:32.225841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.503 [2024-07-15 21:05:32.225852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.503 qpair failed and we were unable to recover it. 00:29:28.503 [2024-07-15 21:05:32.235771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.503 [2024-07-15 21:05:32.235877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.503 [2024-07-15 21:05:32.235893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.503 [2024-07-15 21:05:32.235898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.503 [2024-07-15 21:05:32.235902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.503 [2024-07-15 21:05:32.235913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.503 qpair failed and we were unable to recover it. 
00:29:28.503 [2024-07-15 21:05:32.245833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.503 [2024-07-15 21:05:32.245902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.503 [2024-07-15 21:05:32.245914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.503 [2024-07-15 21:05:32.245919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.503 [2024-07-15 21:05:32.245923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.503 [2024-07-15 21:05:32.245934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.503 qpair failed and we were unable to recover it. 00:29:28.503 [2024-07-15 21:05:32.255802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.503 [2024-07-15 21:05:32.255868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.503 [2024-07-15 21:05:32.255881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.503 [2024-07-15 21:05:32.255886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.503 [2024-07-15 21:05:32.255890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.503 [2024-07-15 21:05:32.255901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.503 qpair failed and we were unable to recover it. 00:29:28.503 [2024-07-15 21:05:32.265867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.503 [2024-07-15 21:05:32.265934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.503 [2024-07-15 21:05:32.265946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.503 [2024-07-15 21:05:32.265951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.503 [2024-07-15 21:05:32.265956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.503 [2024-07-15 21:05:32.265966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.503 qpair failed and we were unable to recover it. 
00:29:28.503 [2024-07-15 21:05:32.275912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.504 [2024-07-15 21:05:32.276012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.504 [2024-07-15 21:05:32.276024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.504 [2024-07-15 21:05:32.276029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.504 [2024-07-15 21:05:32.276037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.504 [2024-07-15 21:05:32.276048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-07-15 21:05:32.285915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.504 [2024-07-15 21:05:32.285986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.504 [2024-07-15 21:05:32.285998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.504 [2024-07-15 21:05:32.286003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.504 [2024-07-15 21:05:32.286007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.504 [2024-07-15 21:05:32.286018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-07-15 21:05:32.295860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.504 [2024-07-15 21:05:32.295929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.504 [2024-07-15 21:05:32.295941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.504 [2024-07-15 21:05:32.295946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.504 [2024-07-15 21:05:32.295950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.504 [2024-07-15 21:05:32.295961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.504 qpair failed and we were unable to recover it. 
00:29:28.504 [2024-07-15 21:05:32.305969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.504 [2024-07-15 21:05:32.306039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.504 [2024-07-15 21:05:32.306052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.504 [2024-07-15 21:05:32.306056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.504 [2024-07-15 21:05:32.306061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.504 [2024-07-15 21:05:32.306071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-07-15 21:05:32.315883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.504 [2024-07-15 21:05:32.315951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.504 [2024-07-15 21:05:32.315964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.504 [2024-07-15 21:05:32.315970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.504 [2024-07-15 21:05:32.315975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.504 [2024-07-15 21:05:32.315987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-07-15 21:05:32.326084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.504 [2024-07-15 21:05:32.326209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.504 [2024-07-15 21:05:32.326222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.504 [2024-07-15 21:05:32.326228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.504 [2024-07-15 21:05:32.326232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.504 [2024-07-15 21:05:32.326243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.504 qpair failed and we were unable to recover it. 
00:29:28.504 [2024-07-15 21:05:32.335917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.504 [2024-07-15 21:05:32.336001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.504 [2024-07-15 21:05:32.336013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.504 [2024-07-15 21:05:32.336018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.504 [2024-07-15 21:05:32.336022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.504 [2024-07-15 21:05:32.336033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-07-15 21:05:32.346078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.504 [2024-07-15 21:05:32.346180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.504 [2024-07-15 21:05:32.346193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.504 [2024-07-15 21:05:32.346199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.504 [2024-07-15 21:05:32.346203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.504 [2024-07-15 21:05:32.346214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-07-15 21:05:32.356095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.504 [2024-07-15 21:05:32.356172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.504 [2024-07-15 21:05:32.356184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.504 [2024-07-15 21:05:32.356190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.504 [2024-07-15 21:05:32.356195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.504 [2024-07-15 21:05:32.356207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.504 qpair failed and we were unable to recover it. 
00:29:28.504 [2024-07-15 21:05:32.366134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.504 [2024-07-15 21:05:32.366204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.504 [2024-07-15 21:05:32.366217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.504 [2024-07-15 21:05:32.366222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.504 [2024-07-15 21:05:32.366229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.504 [2024-07-15 21:05:32.366240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-07-15 21:05:32.376119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.504 [2024-07-15 21:05:32.376187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.504 [2024-07-15 21:05:32.376199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.504 [2024-07-15 21:05:32.376204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.504 [2024-07-15 21:05:32.376209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.504 [2024-07-15 21:05:32.376220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-07-15 21:05:32.386188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.504 [2024-07-15 21:05:32.386252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.504 [2024-07-15 21:05:32.386264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.504 [2024-07-15 21:05:32.386269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.504 [2024-07-15 21:05:32.386274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.504 [2024-07-15 21:05:32.386284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.504 qpair failed and we were unable to recover it. 
00:29:28.767 [2024-07-15 21:05:32.396255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.767 [2024-07-15 21:05:32.396323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.767 [2024-07-15 21:05:32.396336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.767 [2024-07-15 21:05:32.396342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.767 [2024-07-15 21:05:32.396347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.767 [2024-07-15 21:05:32.396358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.767 qpair failed and we were unable to recover it. 00:29:28.767 [2024-07-15 21:05:32.406273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.767 [2024-07-15 21:05:32.406357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.767 [2024-07-15 21:05:32.406370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.767 [2024-07-15 21:05:32.406375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.767 [2024-07-15 21:05:32.406379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.767 [2024-07-15 21:05:32.406391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.767 qpair failed and we were unable to recover it. 00:29:28.767 [2024-07-15 21:05:32.416262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.767 [2024-07-15 21:05:32.416345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.767 [2024-07-15 21:05:32.416357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.767 [2024-07-15 21:05:32.416362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.767 [2024-07-15 21:05:32.416367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.767 [2024-07-15 21:05:32.416378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.767 qpair failed and we were unable to recover it. 
00:29:28.767 [2024-07-15 21:05:32.426312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.767 [2024-07-15 21:05:32.426407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.767 [2024-07-15 21:05:32.426419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.767 [2024-07-15 21:05:32.426425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.767 [2024-07-15 21:05:32.426429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.767 [2024-07-15 21:05:32.426441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.767 qpair failed and we were unable to recover it. 00:29:28.767 [2024-07-15 21:05:32.436378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.767 [2024-07-15 21:05:32.436449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.767 [2024-07-15 21:05:32.436462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.767 [2024-07-15 21:05:32.436466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.767 [2024-07-15 21:05:32.436471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.767 [2024-07-15 21:05:32.436482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.767 qpair failed and we were unable to recover it. 00:29:28.767 [2024-07-15 21:05:32.446389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.767 [2024-07-15 21:05:32.446459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.767 [2024-07-15 21:05:32.446471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.767 [2024-07-15 21:05:32.446477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.767 [2024-07-15 21:05:32.446481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.767 [2024-07-15 21:05:32.446492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.767 qpair failed and we were unable to recover it. 
00:29:28.767 [2024-07-15 21:05:32.456370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.767 [2024-07-15 21:05:32.456436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.767 [2024-07-15 21:05:32.456448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.768 [2024-07-15 21:05:32.456456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.768 [2024-07-15 21:05:32.456460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.768 [2024-07-15 21:05:32.456471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-07-15 21:05:32.466520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.768 [2024-07-15 21:05:32.466586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.768 [2024-07-15 21:05:32.466599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.768 [2024-07-15 21:05:32.466604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.768 [2024-07-15 21:05:32.466608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.768 [2024-07-15 21:05:32.466619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-07-15 21:05:32.476466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.768 [2024-07-15 21:05:32.476556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.768 [2024-07-15 21:05:32.476569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.768 [2024-07-15 21:05:32.476574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.768 [2024-07-15 21:05:32.476578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.768 [2024-07-15 21:05:32.476589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.768 qpair failed and we were unable to recover it. 
00:29:28.768 [2024-07-15 21:05:32.486479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.768 [2024-07-15 21:05:32.486551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.768 [2024-07-15 21:05:32.486563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.768 [2024-07-15 21:05:32.486568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.768 [2024-07-15 21:05:32.486572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.768 [2024-07-15 21:05:32.486583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-07-15 21:05:32.496468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.768 [2024-07-15 21:05:32.496536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.768 [2024-07-15 21:05:32.496548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.768 [2024-07-15 21:05:32.496553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.768 [2024-07-15 21:05:32.496558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.768 [2024-07-15 21:05:32.496568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-07-15 21:05:32.506525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.768 [2024-07-15 21:05:32.506587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.768 [2024-07-15 21:05:32.506600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.768 [2024-07-15 21:05:32.506605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.768 [2024-07-15 21:05:32.506609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.768 [2024-07-15 21:05:32.506620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.768 qpair failed and we were unable to recover it. 
00:29:28.768 [2024-07-15 21:05:32.516558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.768 [2024-07-15 21:05:32.516622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.768 [2024-07-15 21:05:32.516635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.768 [2024-07-15 21:05:32.516640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.768 [2024-07-15 21:05:32.516644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.768 [2024-07-15 21:05:32.516655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-07-15 21:05:32.526585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.768 [2024-07-15 21:05:32.526654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.768 [2024-07-15 21:05:32.526666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.768 [2024-07-15 21:05:32.526671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.768 [2024-07-15 21:05:32.526676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.768 [2024-07-15 21:05:32.526686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-07-15 21:05:32.536629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.768 [2024-07-15 21:05:32.536704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.768 [2024-07-15 21:05:32.536716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.768 [2024-07-15 21:05:32.536722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.768 [2024-07-15 21:05:32.536726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.768 [2024-07-15 21:05:32.536737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.768 qpair failed and we were unable to recover it. 
00:29:28.768 [2024-07-15 21:05:32.546648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.768 [2024-07-15 21:05:32.546720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.768 [2024-07-15 21:05:32.546735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.768 [2024-07-15 21:05:32.546740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.768 [2024-07-15 21:05:32.546745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.768 [2024-07-15 21:05:32.546756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-07-15 21:05:32.556671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.768 [2024-07-15 21:05:32.556742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.768 [2024-07-15 21:05:32.556761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.768 [2024-07-15 21:05:32.556767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.768 [2024-07-15 21:05:32.556772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.768 [2024-07-15 21:05:32.556786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-07-15 21:05:32.566736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.768 [2024-07-15 21:05:32.566812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.768 [2024-07-15 21:05:32.566831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.768 [2024-07-15 21:05:32.566837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.768 [2024-07-15 21:05:32.566842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.768 [2024-07-15 21:05:32.566856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.768 qpair failed and we were unable to recover it. 
00:29:28.768 [2024-07-15 21:05:32.576670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.768 [2024-07-15 21:05:32.576739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.768 [2024-07-15 21:05:32.576753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.769 [2024-07-15 21:05:32.576758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.769 [2024-07-15 21:05:32.576762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.769 [2024-07-15 21:05:32.576774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-07-15 21:05:32.586747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.769 [2024-07-15 21:05:32.586861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.769 [2024-07-15 21:05:32.586881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.769 [2024-07-15 21:05:32.586887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.769 [2024-07-15 21:05:32.586891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.769 [2024-07-15 21:05:32.586912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-07-15 21:05:32.596768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.769 [2024-07-15 21:05:32.596837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.769 [2024-07-15 21:05:32.596856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.769 [2024-07-15 21:05:32.596863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.769 [2024-07-15 21:05:32.596867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.769 [2024-07-15 21:05:32.596881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.769 qpair failed and we were unable to recover it. 
00:29:28.769 [2024-07-15 21:05:32.606806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.769 [2024-07-15 21:05:32.606878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.769 [2024-07-15 21:05:32.606897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.769 [2024-07-15 21:05:32.606903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.769 [2024-07-15 21:05:32.606908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.769 [2024-07-15 21:05:32.606922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-07-15 21:05:32.616785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.769 [2024-07-15 21:05:32.616859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.769 [2024-07-15 21:05:32.616878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.769 [2024-07-15 21:05:32.616884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.769 [2024-07-15 21:05:32.616889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.769 [2024-07-15 21:05:32.616904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-07-15 21:05:32.626848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.769 [2024-07-15 21:05:32.626918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.769 [2024-07-15 21:05:32.626937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.769 [2024-07-15 21:05:32.626943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.769 [2024-07-15 21:05:32.626948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.769 [2024-07-15 21:05:32.626962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.769 qpair failed and we were unable to recover it. 
00:29:28.769 [2024-07-15 21:05:32.636910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.769 [2024-07-15 21:05:32.636978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.769 [2024-07-15 21:05:32.636996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.769 [2024-07-15 21:05:32.637001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.769 [2024-07-15 21:05:32.637006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.769 [2024-07-15 21:05:32.637017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-07-15 21:05:32.646945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.769 [2024-07-15 21:05:32.647016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.769 [2024-07-15 21:05:32.647028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.769 [2024-07-15 21:05:32.647033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.769 [2024-07-15 21:05:32.647038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.769 [2024-07-15 21:05:32.647049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-07-15 21:05:32.656977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.769 [2024-07-15 21:05:32.657047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.769 [2024-07-15 21:05:32.657060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.769 [2024-07-15 21:05:32.657065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.769 [2024-07-15 21:05:32.657069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:28.769 [2024-07-15 21:05:32.657080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.769 qpair failed and we were unable to recover it. 
00:29:29.031 [2024-07-15 21:05:32.666978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.031 [2024-07-15 21:05:32.667150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.031 [2024-07-15 21:05:32.667166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.031 [2024-07-15 21:05:32.667171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.031 [2024-07-15 21:05:32.667176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.031 [2024-07-15 21:05:32.667187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.031 qpair failed and we were unable to recover it. 00:29:29.031 [2024-07-15 21:05:32.677000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.031 [2024-07-15 21:05:32.677070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.031 [2024-07-15 21:05:32.677082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.031 [2024-07-15 21:05:32.677087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.031 [2024-07-15 21:05:32.677094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.031 [2024-07-15 21:05:32.677105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.031 qpair failed and we were unable to recover it. 00:29:29.031 [2024-07-15 21:05:32.686998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.031 [2024-07-15 21:05:32.687066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.031 [2024-07-15 21:05:32.687078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.031 [2024-07-15 21:05:32.687083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.031 [2024-07-15 21:05:32.687087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.031 [2024-07-15 21:05:32.687099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.031 qpair failed and we were unable to recover it. 
00:29:29.032 [2024-07-15 21:05:32.696910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.032 [2024-07-15 21:05:32.696975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.032 [2024-07-15 21:05:32.696987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.032 [2024-07-15 21:05:32.696992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.032 [2024-07-15 21:05:32.696996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.032 [2024-07-15 21:05:32.697007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.032 qpair failed and we were unable to recover it. 00:29:29.032 [2024-07-15 21:05:32.707085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.032 [2024-07-15 21:05:32.707154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.032 [2024-07-15 21:05:32.707166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.032 [2024-07-15 21:05:32.707171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.032 [2024-07-15 21:05:32.707176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.032 [2024-07-15 21:05:32.707187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.032 qpair failed and we were unable to recover it. 00:29:29.032 [2024-07-15 21:05:32.717118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.032 [2024-07-15 21:05:32.717194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.032 [2024-07-15 21:05:32.717208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.032 [2024-07-15 21:05:32.717215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.032 [2024-07-15 21:05:32.717220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.032 [2024-07-15 21:05:32.717232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.032 qpair failed and we were unable to recover it. 
00:29:29.032 [2024-07-15 21:05:32.727156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.032 [2024-07-15 21:05:32.727236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.032 [2024-07-15 21:05:32.727248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.032 [2024-07-15 21:05:32.727253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.032 [2024-07-15 21:05:32.727257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.032 [2024-07-15 21:05:32.727269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.032 qpair failed and we were unable to recover it. 00:29:29.032 [2024-07-15 21:05:32.737028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.032 [2024-07-15 21:05:32.737097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.032 [2024-07-15 21:05:32.737109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.032 [2024-07-15 21:05:32.737114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.032 [2024-07-15 21:05:32.737118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.032 [2024-07-15 21:05:32.737133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.032 qpair failed and we were unable to recover it. 00:29:29.032 [2024-07-15 21:05:32.747205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.032 [2024-07-15 21:05:32.747302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.032 [2024-07-15 21:05:32.747314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.032 [2024-07-15 21:05:32.747319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.032 [2024-07-15 21:05:32.747324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.032 [2024-07-15 21:05:32.747335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.032 qpair failed and we were unable to recover it. 
00:29:29.032 [2024-07-15 21:05:32.757244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.032 [2024-07-15 21:05:32.757315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.032 [2024-07-15 21:05:32.757327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.032 [2024-07-15 21:05:32.757332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.032 [2024-07-15 21:05:32.757336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.032 [2024-07-15 21:05:32.757347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.032 qpair failed and we were unable to recover it. 00:29:29.032 [2024-07-15 21:05:32.767248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.032 [2024-07-15 21:05:32.767324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.032 [2024-07-15 21:05:32.767336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.032 [2024-07-15 21:05:32.767341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.032 [2024-07-15 21:05:32.767349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.032 [2024-07-15 21:05:32.767360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.032 qpair failed and we were unable to recover it. 00:29:29.032 [2024-07-15 21:05:32.777276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.032 [2024-07-15 21:05:32.777348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.032 [2024-07-15 21:05:32.777361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.032 [2024-07-15 21:05:32.777366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.032 [2024-07-15 21:05:32.777370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.032 [2024-07-15 21:05:32.777381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.032 qpair failed and we were unable to recover it. 
00:29:29.032 [2024-07-15 21:05:32.787305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.032 [2024-07-15 21:05:32.787373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.032 [2024-07-15 21:05:32.787386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.032 [2024-07-15 21:05:32.787390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.032 [2024-07-15 21:05:32.787395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.032 [2024-07-15 21:05:32.787406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.032 qpair failed and we were unable to recover it. 00:29:29.032 [2024-07-15 21:05:32.797367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.032 [2024-07-15 21:05:32.797440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.032 [2024-07-15 21:05:32.797453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.032 [2024-07-15 21:05:32.797458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.032 [2024-07-15 21:05:32.797462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.032 [2024-07-15 21:05:32.797472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.032 qpair failed and we were unable to recover it. 00:29:29.032 [2024-07-15 21:05:32.807378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.032 [2024-07-15 21:05:32.807462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.032 [2024-07-15 21:05:32.807474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.032 [2024-07-15 21:05:32.807479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.032 [2024-07-15 21:05:32.807483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.032 [2024-07-15 21:05:32.807494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.032 qpair failed and we were unable to recover it. 
00:29:29.033 [2024-07-15 21:05:32.817277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.033 [2024-07-15 21:05:32.817343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.033 [2024-07-15 21:05:32.817356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.033 [2024-07-15 21:05:32.817361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.033 [2024-07-15 21:05:32.817365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.033 [2024-07-15 21:05:32.817376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.033 qpair failed and we were unable to recover it. 00:29:29.033 [2024-07-15 21:05:32.827467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.033 [2024-07-15 21:05:32.827549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.033 [2024-07-15 21:05:32.827561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.033 [2024-07-15 21:05:32.827566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.033 [2024-07-15 21:05:32.827570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.033 [2024-07-15 21:05:32.827582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.033 qpair failed and we were unable to recover it. 00:29:29.033 [2024-07-15 21:05:32.837456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.033 [2024-07-15 21:05:32.837528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.033 [2024-07-15 21:05:32.837540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.033 [2024-07-15 21:05:32.837545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.033 [2024-07-15 21:05:32.837549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.033 [2024-07-15 21:05:32.837560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.033 qpair failed and we were unable to recover it. 
00:29:29.033 [2024-07-15 21:05:32.847412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.033 [2024-07-15 21:05:32.847500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.033 [2024-07-15 21:05:32.847512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.033 [2024-07-15 21:05:32.847517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.033 [2024-07-15 21:05:32.847522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.033 [2024-07-15 21:05:32.847533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.033 qpair failed and we were unable to recover it. 00:29:29.033 [2024-07-15 21:05:32.857482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.033 [2024-07-15 21:05:32.857550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.033 [2024-07-15 21:05:32.857562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.033 [2024-07-15 21:05:32.857570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.033 [2024-07-15 21:05:32.857574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.033 [2024-07-15 21:05:32.857585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.033 qpair failed and we were unable to recover it. 00:29:29.033 [2024-07-15 21:05:32.867557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.033 [2024-07-15 21:05:32.867623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.033 [2024-07-15 21:05:32.867635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.033 [2024-07-15 21:05:32.867640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.033 [2024-07-15 21:05:32.867645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.033 [2024-07-15 21:05:32.867656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.033 qpair failed and we were unable to recover it. 
00:29:29.033 [2024-07-15 21:05:32.877588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.033 [2024-07-15 21:05:32.877663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.033 [2024-07-15 21:05:32.877676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.033 [2024-07-15 21:05:32.877681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.033 [2024-07-15 21:05:32.877686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.033 [2024-07-15 21:05:32.877696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.033 qpair failed and we were unable to recover it. 00:29:29.033 [2024-07-15 21:05:32.887602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.033 [2024-07-15 21:05:32.887670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.033 [2024-07-15 21:05:32.887682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.033 [2024-07-15 21:05:32.887688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.033 [2024-07-15 21:05:32.887692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.033 [2024-07-15 21:05:32.887703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.033 qpair failed and we were unable to recover it. 00:29:29.033 [2024-07-15 21:05:32.897583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.033 [2024-07-15 21:05:32.897655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.033 [2024-07-15 21:05:32.897668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.033 [2024-07-15 21:05:32.897673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.033 [2024-07-15 21:05:32.897677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.033 [2024-07-15 21:05:32.897688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.033 qpair failed and we were unable to recover it. 
00:29:29.033 [2024-07-15 21:05:32.907610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.033 [2024-07-15 21:05:32.907679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.033 [2024-07-15 21:05:32.907692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.033 [2024-07-15 21:05:32.907697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.033 [2024-07-15 21:05:32.907702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.033 [2024-07-15 21:05:32.907714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.033 qpair failed and we were unable to recover it. 00:29:29.033 [2024-07-15 21:05:32.917705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.033 [2024-07-15 21:05:32.917787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.033 [2024-07-15 21:05:32.917800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.033 [2024-07-15 21:05:32.917806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.033 [2024-07-15 21:05:32.917810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.033 [2024-07-15 21:05:32.917821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.033 qpair failed and we were unable to recover it. 00:29:29.295 [2024-07-15 21:05:32.927704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.295 [2024-07-15 21:05:32.927777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.295 [2024-07-15 21:05:32.927789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.295 [2024-07-15 21:05:32.927794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.295 [2024-07-15 21:05:32.927798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.295 [2024-07-15 21:05:32.927809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.295 qpair failed and we were unable to recover it. 
00:29:29.295 [2024-07-15 21:05:32.937739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.295 [2024-07-15 21:05:32.937815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.295 [2024-07-15 21:05:32.937827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.295 [2024-07-15 21:05:32.937832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.295 [2024-07-15 21:05:32.937837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.295 [2024-07-15 21:05:32.937847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.295 qpair failed and we were unable to recover it. 00:29:29.295 [2024-07-15 21:05:32.947858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.295 [2024-07-15 21:05:32.947956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.295 [2024-07-15 21:05:32.947972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.295 [2024-07-15 21:05:32.947977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.295 [2024-07-15 21:05:32.947982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.295 [2024-07-15 21:05:32.947992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.295 qpair failed and we were unable to recover it. 00:29:29.295 [2024-07-15 21:05:32.957808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.295 [2024-07-15 21:05:32.957904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.295 [2024-07-15 21:05:32.957917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.295 [2024-07-15 21:05:32.957921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.295 [2024-07-15 21:05:32.957926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.295 [2024-07-15 21:05:32.957936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.295 qpair failed and we were unable to recover it. 
00:29:29.295 [2024-07-15 21:05:32.967842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.295 [2024-07-15 21:05:32.967914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.295 [2024-07-15 21:05:32.967927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.295 [2024-07-15 21:05:32.967932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.295 [2024-07-15 21:05:32.967936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.295 [2024-07-15 21:05:32.967947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.295 qpair failed and we were unable to recover it. 00:29:29.295 [2024-07-15 21:05:32.977829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.295 [2024-07-15 21:05:32.977924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.295 [2024-07-15 21:05:32.977937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.295 [2024-07-15 21:05:32.977942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.295 [2024-07-15 21:05:32.977946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.295 [2024-07-15 21:05:32.977957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.295 qpair failed and we were unable to recover it. 00:29:29.295 [2024-07-15 21:05:32.987938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.295 [2024-07-15 21:05:32.988007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.295 [2024-07-15 21:05:32.988019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.295 [2024-07-15 21:05:32.988024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.295 [2024-07-15 21:05:32.988029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.295 [2024-07-15 21:05:32.988043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.295 qpair failed and we were unable to recover it. 
00:29:29.295 [2024-07-15 21:05:32.997912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.295 [2024-07-15 21:05:32.997979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.295 [2024-07-15 21:05:32.997991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.295 [2024-07-15 21:05:32.997996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.295 [2024-07-15 21:05:32.998000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.295 [2024-07-15 21:05:32.998011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.295 qpair failed and we were unable to recover it. 00:29:29.295 [2024-07-15 21:05:33.007993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.295 [2024-07-15 21:05:33.008076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.295 [2024-07-15 21:05:33.008089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.295 [2024-07-15 21:05:33.008094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.295 [2024-07-15 21:05:33.008098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.295 [2024-07-15 21:05:33.008109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.295 qpair failed and we were unable to recover it. 00:29:29.295 [2024-07-15 21:05:33.017932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.295 [2024-07-15 21:05:33.018001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.295 [2024-07-15 21:05:33.018014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.295 [2024-07-15 21:05:33.018019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.295 [2024-07-15 21:05:33.018023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.295 [2024-07-15 21:05:33.018034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.295 qpair failed and we were unable to recover it. 
00:29:29.295 [2024-07-15 21:05:33.028033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.295 [2024-07-15 21:05:33.028099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.295 [2024-07-15 21:05:33.028112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.295 [2024-07-15 21:05:33.028117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.295 [2024-07-15 21:05:33.028125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.295 [2024-07-15 21:05:33.028136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.295 qpair failed and we were unable to recover it. 00:29:29.295 [2024-07-15 21:05:33.038035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.295 [2024-07-15 21:05:33.038106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.295 [2024-07-15 21:05:33.038125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.295 [2024-07-15 21:05:33.038131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.295 [2024-07-15 21:05:33.038135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.296 [2024-07-15 21:05:33.038146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.296 qpair failed and we were unable to recover it. 00:29:29.296 [2024-07-15 21:05:33.048073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.296 [2024-07-15 21:05:33.048154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.296 [2024-07-15 21:05:33.048168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.296 [2024-07-15 21:05:33.048173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.296 [2024-07-15 21:05:33.048177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.296 [2024-07-15 21:05:33.048190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.296 qpair failed and we were unable to recover it. 
00:29:29.296 [2024-07-15 21:05:33.057925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.296 [2024-07-15 21:05:33.058044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.296 [2024-07-15 21:05:33.058057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.296 [2024-07-15 21:05:33.058062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.296 [2024-07-15 21:05:33.058066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.296 [2024-07-15 21:05:33.058077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.296 qpair failed and we were unable to recover it. 00:29:29.296 [2024-07-15 21:05:33.068097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.296 [2024-07-15 21:05:33.068167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.296 [2024-07-15 21:05:33.068179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.296 [2024-07-15 21:05:33.068185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.296 [2024-07-15 21:05:33.068189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.296 [2024-07-15 21:05:33.068200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.296 qpair failed and we were unable to recover it. 00:29:29.296 [2024-07-15 21:05:33.078055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.296 [2024-07-15 21:05:33.078126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.296 [2024-07-15 21:05:33.078138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.296 [2024-07-15 21:05:33.078143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.296 [2024-07-15 21:05:33.078147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.296 [2024-07-15 21:05:33.078162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.296 qpair failed and we were unable to recover it. 
00:29:29.296 [2024-07-15 21:05:33.088181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.296 [2024-07-15 21:05:33.088278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.296 [2024-07-15 21:05:33.088291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.296 [2024-07-15 21:05:33.088296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.296 [2024-07-15 21:05:33.088301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.296 [2024-07-15 21:05:33.088312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.296 qpair failed and we were unable to recover it. 00:29:29.296 [2024-07-15 21:05:33.098220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.296 [2024-07-15 21:05:33.098304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.296 [2024-07-15 21:05:33.098316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.296 [2024-07-15 21:05:33.098321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.296 [2024-07-15 21:05:33.098325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.296 [2024-07-15 21:05:33.098337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.296 qpair failed and we were unable to recover it. 00:29:29.296 [2024-07-15 21:05:33.108238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.296 [2024-07-15 21:05:33.108303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.296 [2024-07-15 21:05:33.108319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.296 [2024-07-15 21:05:33.108324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.296 [2024-07-15 21:05:33.108328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.296 [2024-07-15 21:05:33.108340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.296 qpair failed and we were unable to recover it. 
00:29:29.296 [2024-07-15 21:05:33.118241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.296 [2024-07-15 21:05:33.118309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.296 [2024-07-15 21:05:33.118321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.296 [2024-07-15 21:05:33.118326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.296 [2024-07-15 21:05:33.118330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.296 [2024-07-15 21:05:33.118342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.296 qpair failed and we were unable to recover it. 00:29:29.296 [2024-07-15 21:05:33.128361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.296 [2024-07-15 21:05:33.128461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.296 [2024-07-15 21:05:33.128474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.296 [2024-07-15 21:05:33.128478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.296 [2024-07-15 21:05:33.128483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.296 [2024-07-15 21:05:33.128494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.296 qpair failed and we were unable to recover it. 00:29:29.296 [2024-07-15 21:05:33.138268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.296 [2024-07-15 21:05:33.138336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.296 [2024-07-15 21:05:33.138348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.296 [2024-07-15 21:05:33.138353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.296 [2024-07-15 21:05:33.138357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.296 [2024-07-15 21:05:33.138368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.296 qpair failed and we were unable to recover it. 
00:29:29.296 [2024-07-15 21:05:33.148387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.296 [2024-07-15 21:05:33.148479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.296 [2024-07-15 21:05:33.148491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.296 [2024-07-15 21:05:33.148496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.296 [2024-07-15 21:05:33.148500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.296 [2024-07-15 21:05:33.148512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.296 qpair failed and we were unable to recover it. 00:29:29.296 [2024-07-15 21:05:33.158366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.296 [2024-07-15 21:05:33.158433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.296 [2024-07-15 21:05:33.158445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.296 [2024-07-15 21:05:33.158450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.296 [2024-07-15 21:05:33.158454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.296 [2024-07-15 21:05:33.158465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.296 qpair failed and we were unable to recover it. 00:29:29.296 [2024-07-15 21:05:33.168416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.296 [2024-07-15 21:05:33.168491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.296 [2024-07-15 21:05:33.168503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.296 [2024-07-15 21:05:33.168508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.296 [2024-07-15 21:05:33.168515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.296 [2024-07-15 21:05:33.168526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.296 qpair failed and we were unable to recover it. 
00:29:29.296 [2024-07-15 21:05:33.178401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.296 [2024-07-15 21:05:33.178471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.296 [2024-07-15 21:05:33.178483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.296 [2024-07-15 21:05:33.178488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.296 [2024-07-15 21:05:33.178492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.296 [2024-07-15 21:05:33.178503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.297 qpair failed and we were unable to recover it. 00:29:29.563 [2024-07-15 21:05:33.188475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.563 [2024-07-15 21:05:33.188549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.563 [2024-07-15 21:05:33.188561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.563 [2024-07-15 21:05:33.188566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.563 [2024-07-15 21:05:33.188571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.563 [2024-07-15 21:05:33.188582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.563 qpair failed and we were unable to recover it. 00:29:29.563 [2024-07-15 21:05:33.198472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.563 [2024-07-15 21:05:33.198539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.563 [2024-07-15 21:05:33.198552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.563 [2024-07-15 21:05:33.198557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.563 [2024-07-15 21:05:33.198561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.563 [2024-07-15 21:05:33.198572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.563 qpair failed and we were unable to recover it. 
00:29:29.563 [2024-07-15 21:05:33.208518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.563 [2024-07-15 21:05:33.208589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.563 [2024-07-15 21:05:33.208601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.563 [2024-07-15 21:05:33.208606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.563 [2024-07-15 21:05:33.208610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.563 [2024-07-15 21:05:33.208621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.563 qpair failed and we were unable to recover it. 00:29:29.563 [2024-07-15 21:05:33.218512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.563 [2024-07-15 21:05:33.218618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.563 [2024-07-15 21:05:33.218630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.563 [2024-07-15 21:05:33.218635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.563 [2024-07-15 21:05:33.218640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.563 [2024-07-15 21:05:33.218651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.563 qpair failed and we were unable to recover it. 00:29:29.563 [2024-07-15 21:05:33.228551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.563 [2024-07-15 21:05:33.228621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.563 [2024-07-15 21:05:33.228633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.563 [2024-07-15 21:05:33.228638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.563 [2024-07-15 21:05:33.228642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.563 [2024-07-15 21:05:33.228653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.563 qpair failed and we were unable to recover it. 
00:29:29.563 [2024-07-15 21:05:33.238564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.563 [2024-07-15 21:05:33.238631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.563 [2024-07-15 21:05:33.238643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.563 [2024-07-15 21:05:33.238648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.563 [2024-07-15 21:05:33.238652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.563 [2024-07-15 21:05:33.238663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.563 qpair failed and we were unable to recover it. 00:29:29.563 [2024-07-15 21:05:33.248617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.563 [2024-07-15 21:05:33.248685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.563 [2024-07-15 21:05:33.248697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.563 [2024-07-15 21:05:33.248702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.563 [2024-07-15 21:05:33.248706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.563 [2024-07-15 21:05:33.248717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.563 qpair failed and we were unable to recover it. 00:29:29.563 [2024-07-15 21:05:33.258596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.563 [2024-07-15 21:05:33.258665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.563 [2024-07-15 21:05:33.258677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.563 [2024-07-15 21:05:33.258685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.563 [2024-07-15 21:05:33.258689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.563 [2024-07-15 21:05:33.258700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.563 qpair failed and we were unable to recover it. 
00:29:29.563 [2024-07-15 21:05:33.268656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.563 [2024-07-15 21:05:33.268722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.563 [2024-07-15 21:05:33.268734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.563 [2024-07-15 21:05:33.268739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.563 [2024-07-15 21:05:33.268743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.563 [2024-07-15 21:05:33.268754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.563 qpair failed and we were unable to recover it. 00:29:29.563 [2024-07-15 21:05:33.278719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.563 [2024-07-15 21:05:33.278825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.563 [2024-07-15 21:05:33.278837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.563 [2024-07-15 21:05:33.278842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.563 [2024-07-15 21:05:33.278846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.563 [2024-07-15 21:05:33.278858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.563 qpair failed and we were unable to recover it. 00:29:29.563 [2024-07-15 21:05:33.288618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.563 [2024-07-15 21:05:33.288693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.563 [2024-07-15 21:05:33.288712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.563 [2024-07-15 21:05:33.288718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.563 [2024-07-15 21:05:33.288723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.563 [2024-07-15 21:05:33.288738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.563 qpair failed and we were unable to recover it. 
00:29:29.563 [2024-07-15 21:05:33.298713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.563 [2024-07-15 21:05:33.298789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.563 [2024-07-15 21:05:33.298808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.298815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.298819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.298833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 00:29:29.564 [2024-07-15 21:05:33.308772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.564 [2024-07-15 21:05:33.308846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.564 [2024-07-15 21:05:33.308865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.308871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.308875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.308890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 00:29:29.564 [2024-07-15 21:05:33.318811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.564 [2024-07-15 21:05:33.318885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.564 [2024-07-15 21:05:33.318904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.318910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.318915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.318929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 
00:29:29.564 [2024-07-15 21:05:33.328846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.564 [2024-07-15 21:05:33.328949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.564 [2024-07-15 21:05:33.328968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.328974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.328979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.328993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 00:29:29.564 [2024-07-15 21:05:33.338851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.564 [2024-07-15 21:05:33.338935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.564 [2024-07-15 21:05:33.338949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.338954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.338959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.338970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 00:29:29.564 [2024-07-15 21:05:33.348862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.564 [2024-07-15 21:05:33.348929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.564 [2024-07-15 21:05:33.348941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.348950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.348954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.348966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 
00:29:29.564 [2024-07-15 21:05:33.358933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.564 [2024-07-15 21:05:33.359024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.564 [2024-07-15 21:05:33.359037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.359042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.359046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.359057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 00:29:29.564 [2024-07-15 21:05:33.368936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.564 [2024-07-15 21:05:33.369004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.564 [2024-07-15 21:05:33.369016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.369021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.369026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.369037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 00:29:29.564 [2024-07-15 21:05:33.378916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.564 [2024-07-15 21:05:33.379012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.564 [2024-07-15 21:05:33.379025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.379030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.379034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.379045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 
00:29:29.564 [2024-07-15 21:05:33.388982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.564 [2024-07-15 21:05:33.389051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.564 [2024-07-15 21:05:33.389064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.389069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.389073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.389084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 00:29:29.564 [2024-07-15 21:05:33.399032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.564 [2024-07-15 21:05:33.399103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.564 [2024-07-15 21:05:33.399116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.399127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.399131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.399143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 00:29:29.564 [2024-07-15 21:05:33.409000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.564 [2024-07-15 21:05:33.409104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.564 [2024-07-15 21:05:33.409116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.409125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.409131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.409142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 
00:29:29.564 [2024-07-15 21:05:33.419017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.564 [2024-07-15 21:05:33.419086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.564 [2024-07-15 21:05:33.419098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.419104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.419108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.419119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 00:29:29.564 [2024-07-15 21:05:33.428979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.564 [2024-07-15 21:05:33.429047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.564 [2024-07-15 21:05:33.429060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.429065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.429069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.429080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 00:29:29.564 [2024-07-15 21:05:33.439131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.564 [2024-07-15 21:05:33.439238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.564 [2024-07-15 21:05:33.439254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.439259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.439264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.439275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 
00:29:29.564 [2024-07-15 21:05:33.449155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.564 [2024-07-15 21:05:33.449227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.564 [2024-07-15 21:05:33.449239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.564 [2024-07-15 21:05:33.449245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.564 [2024-07-15 21:05:33.449249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.564 [2024-07-15 21:05:33.449261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.564 qpair failed and we were unable to recover it. 00:29:29.826 [2024-07-15 21:05:33.459139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.459205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.459217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.459222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.459227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.459238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 00:29:29.826 [2024-07-15 21:05:33.469205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.469277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.469290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.469295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.469299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.469310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 
00:29:29.826 [2024-07-15 21:05:33.479259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.479328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.479340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.479345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.479350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.479363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 00:29:29.826 [2024-07-15 21:05:33.489289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.489379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.489392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.489397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.489401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.489413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 00:29:29.826 [2024-07-15 21:05:33.499263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.499337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.499349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.499354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.499359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.499370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 
00:29:29.826 [2024-07-15 21:05:33.509297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.509363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.509375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.509380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.509384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.509396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 00:29:29.826 [2024-07-15 21:05:33.519226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.519295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.519308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.519313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.519317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.519329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 00:29:29.826 [2024-07-15 21:05:33.529366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.529434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.529449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.529455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.529459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.529470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 
00:29:29.826 [2024-07-15 21:05:33.539395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.539492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.539505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.539509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.539514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.539524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 00:29:29.826 [2024-07-15 21:05:33.549421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.549505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.549517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.549522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.549526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.549537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 00:29:29.826 [2024-07-15 21:05:33.559443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.559507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.559520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.559525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.559529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.559540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 
00:29:29.826 [2024-07-15 21:05:33.569471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.569543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.569555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.569560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.569567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.569578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 00:29:29.826 [2024-07-15 21:05:33.579451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.579520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.579532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.579537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.579541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.579552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 00:29:29.826 [2024-07-15 21:05:33.589541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.589612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.589625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.589630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.589634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.589646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 
00:29:29.826 [2024-07-15 21:05:33.599546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.599608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.599620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.599625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.599630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.599640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 00:29:29.826 [2024-07-15 21:05:33.609626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.609704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.609716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.609721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.609725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.609736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 00:29:29.826 [2024-07-15 21:05:33.619580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.619696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.619716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.619722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.619727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.619741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 
00:29:29.826 [2024-07-15 21:05:33.629641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.629712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.826 [2024-07-15 21:05:33.629731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.826 [2024-07-15 21:05:33.629738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.826 [2024-07-15 21:05:33.629742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.826 [2024-07-15 21:05:33.629757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.826 qpair failed and we were unable to recover it. 00:29:29.826 [2024-07-15 21:05:33.639560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.826 [2024-07-15 21:05:33.639656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.827 [2024-07-15 21:05:33.639670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.827 [2024-07-15 21:05:33.639675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.827 [2024-07-15 21:05:33.639680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.827 [2024-07-15 21:05:33.639693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.827 qpair failed and we were unable to recover it. 00:29:29.827 [2024-07-15 21:05:33.649694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.827 [2024-07-15 21:05:33.649772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.827 [2024-07-15 21:05:33.649791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.827 [2024-07-15 21:05:33.649797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.827 [2024-07-15 21:05:33.649801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.827 [2024-07-15 21:05:33.649816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.827 qpair failed and we were unable to recover it. 
00:29:29.827 [2024-07-15 21:05:33.659673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.827 [2024-07-15 21:05:33.659747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.827 [2024-07-15 21:05:33.659766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.827 [2024-07-15 21:05:33.659775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.827 [2024-07-15 21:05:33.659780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.827 [2024-07-15 21:05:33.659794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.827 qpair failed and we were unable to recover it. 00:29:29.827 [2024-07-15 21:05:33.669756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.827 [2024-07-15 21:05:33.669824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.827 [2024-07-15 21:05:33.669843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.827 [2024-07-15 21:05:33.669849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.827 [2024-07-15 21:05:33.669854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.827 [2024-07-15 21:05:33.669868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.827 qpair failed and we were unable to recover it. 00:29:29.827 [2024-07-15 21:05:33.679762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.827 [2024-07-15 21:05:33.679833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.827 [2024-07-15 21:05:33.679851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.827 [2024-07-15 21:05:33.679858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.827 [2024-07-15 21:05:33.679862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.827 [2024-07-15 21:05:33.679876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.827 qpair failed and we were unable to recover it. 
00:29:29.827 [2024-07-15 21:05:33.689805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.827 [2024-07-15 21:05:33.689878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.827 [2024-07-15 21:05:33.689891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.827 [2024-07-15 21:05:33.689896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.827 [2024-07-15 21:05:33.689900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.827 [2024-07-15 21:05:33.689912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.827 qpair failed and we were unable to recover it. 00:29:29.827 [2024-07-15 21:05:33.699796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.827 [2024-07-15 21:05:33.699868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.827 [2024-07-15 21:05:33.699880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.827 [2024-07-15 21:05:33.699886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.827 [2024-07-15 21:05:33.699890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.827 [2024-07-15 21:05:33.699901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.827 qpair failed and we were unable to recover it. 00:29:29.827 [2024-07-15 21:05:33.709739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.827 [2024-07-15 21:05:33.709830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.827 [2024-07-15 21:05:33.709842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.827 [2024-07-15 21:05:33.709848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.827 [2024-07-15 21:05:33.709852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:29.827 [2024-07-15 21:05:33.709863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.827 qpair failed and we were unable to recover it. 
00:29:30.088 [2024-07-15 21:05:33.719911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.088 [2024-07-15 21:05:33.719980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.088 [2024-07-15 21:05:33.719993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.088 [2024-07-15 21:05:33.719998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.088 [2024-07-15 21:05:33.720002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.088 [2024-07-15 21:05:33.720013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.088 qpair failed and we were unable to recover it. 00:29:30.088 [2024-07-15 21:05:33.729910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.088 [2024-07-15 21:05:33.729985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.088 [2024-07-15 21:05:33.729998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.088 [2024-07-15 21:05:33.730004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.088 [2024-07-15 21:05:33.730008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.088 [2024-07-15 21:05:33.730019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.088 qpair failed and we were unable to recover it. 00:29:30.088 [2024-07-15 21:05:33.739930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.088 [2024-07-15 21:05:33.740005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.088 [2024-07-15 21:05:33.740019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.088 [2024-07-15 21:05:33.740026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.088 [2024-07-15 21:05:33.740030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.089 [2024-07-15 21:05:33.740042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.089 qpair failed and we were unable to recover it. 
00:29:30.089 [2024-07-15 21:05:33.749943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.089 [2024-07-15 21:05:33.750010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.089 [2024-07-15 21:05:33.750023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.089 [2024-07-15 21:05:33.750031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.089 [2024-07-15 21:05:33.750035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.089 [2024-07-15 21:05:33.750046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.089 qpair failed and we were unable to recover it. 00:29:30.089 [2024-07-15 21:05:33.759988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.089 [2024-07-15 21:05:33.760060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.089 [2024-07-15 21:05:33.760073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.089 [2024-07-15 21:05:33.760078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.089 [2024-07-15 21:05:33.760082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.089 [2024-07-15 21:05:33.760093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.089 qpair failed and we were unable to recover it. 00:29:30.089 [2024-07-15 21:05:33.770034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.089 [2024-07-15 21:05:33.770104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.089 [2024-07-15 21:05:33.770116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.089 [2024-07-15 21:05:33.770125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.089 [2024-07-15 21:05:33.770130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.089 [2024-07-15 21:05:33.770141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.089 qpair failed and we were unable to recover it. 
00:29:30.089 [2024-07-15 21:05:33.779999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.089 [2024-07-15 21:05:33.780063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.089 [2024-07-15 21:05:33.780075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.089 [2024-07-15 21:05:33.780080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.089 [2024-07-15 21:05:33.780085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.089 [2024-07-15 21:05:33.780095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.089 qpair failed and we were unable to recover it. 00:29:30.089 [2024-07-15 21:05:33.790054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.089 [2024-07-15 21:05:33.790155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.089 [2024-07-15 21:05:33.790168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.089 [2024-07-15 21:05:33.790173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.089 [2024-07-15 21:05:33.790177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.089 [2024-07-15 21:05:33.790188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.089 qpair failed and we were unable to recover it. 00:29:30.089 [2024-07-15 21:05:33.800084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.089 [2024-07-15 21:05:33.800153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.089 [2024-07-15 21:05:33.800165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.089 [2024-07-15 21:05:33.800170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.089 [2024-07-15 21:05:33.800175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.089 [2024-07-15 21:05:33.800186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.089 qpair failed and we were unable to recover it. 
00:29:30.089 [2024-07-15 21:05:33.810134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.089 [2024-07-15 21:05:33.810224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.089 [2024-07-15 21:05:33.810239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.089 [2024-07-15 21:05:33.810245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.089 [2024-07-15 21:05:33.810249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.089 [2024-07-15 21:05:33.810261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.089 qpair failed and we were unable to recover it. 00:29:30.089 [2024-07-15 21:05:33.820111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.089 [2024-07-15 21:05:33.820183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.089 [2024-07-15 21:05:33.820196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.089 [2024-07-15 21:05:33.820201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.089 [2024-07-15 21:05:33.820205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.089 [2024-07-15 21:05:33.820216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.089 qpair failed and we were unable to recover it. 00:29:30.089 [2024-07-15 21:05:33.830170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.089 [2024-07-15 21:05:33.830236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.089 [2024-07-15 21:05:33.830248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.089 [2024-07-15 21:05:33.830253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.089 [2024-07-15 21:05:33.830257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.089 [2024-07-15 21:05:33.830268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.089 qpair failed and we were unable to recover it. 
00:29:30.089 [2024-07-15 21:05:33.840199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.089 [2024-07-15 21:05:33.840265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.089 [2024-07-15 21:05:33.840279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.089 [2024-07-15 21:05:33.840285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.089 [2024-07-15 21:05:33.840289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.089 [2024-07-15 21:05:33.840300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.089 qpair failed and we were unable to recover it. 00:29:30.089 [2024-07-15 21:05:33.850271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.089 [2024-07-15 21:05:33.850353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.089 [2024-07-15 21:05:33.850365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.089 [2024-07-15 21:05:33.850370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.089 [2024-07-15 21:05:33.850374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.089 [2024-07-15 21:05:33.850385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.089 qpair failed and we were unable to recover it. 00:29:30.089 [2024-07-15 21:05:33.860113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.089 [2024-07-15 21:05:33.860188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.089 [2024-07-15 21:05:33.860200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.089 [2024-07-15 21:05:33.860205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.089 [2024-07-15 21:05:33.860210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.089 [2024-07-15 21:05:33.860220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.090 qpair failed and we were unable to recover it. 
00:29:30.090 [2024-07-15 21:05:33.870409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.090 [2024-07-15 21:05:33.870581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.090 [2024-07-15 21:05:33.870593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.090 [2024-07-15 21:05:33.870597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.090 [2024-07-15 21:05:33.870602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.090 [2024-07-15 21:05:33.870612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-07-15 21:05:33.880325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.090 [2024-07-15 21:05:33.880392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.090 [2024-07-15 21:05:33.880404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.090 [2024-07-15 21:05:33.880409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.090 [2024-07-15 21:05:33.880414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.090 [2024-07-15 21:05:33.880428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-07-15 21:05:33.890240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.090 [2024-07-15 21:05:33.890315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.090 [2024-07-15 21:05:33.890327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.090 [2024-07-15 21:05:33.890332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.090 [2024-07-15 21:05:33.890336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.090 [2024-07-15 21:05:33.890347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.090 qpair failed and we were unable to recover it. 
00:29:30.090 [2024-07-15 21:05:33.900340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.090 [2024-07-15 21:05:33.900409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.090 [2024-07-15 21:05:33.900421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.090 [2024-07-15 21:05:33.900426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.090 [2024-07-15 21:05:33.900431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.090 [2024-07-15 21:05:33.900442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-07-15 21:05:33.910383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.090 [2024-07-15 21:05:33.910453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.090 [2024-07-15 21:05:33.910465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.090 [2024-07-15 21:05:33.910470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.090 [2024-07-15 21:05:33.910474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.090 [2024-07-15 21:05:33.910486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-07-15 21:05:33.920428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.090 [2024-07-15 21:05:33.920495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.090 [2024-07-15 21:05:33.920508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.090 [2024-07-15 21:05:33.920513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.090 [2024-07-15 21:05:33.920517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.090 [2024-07-15 21:05:33.920528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.090 qpair failed and we were unable to recover it. 
00:29:30.090 [2024-07-15 21:05:33.930461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.090 [2024-07-15 21:05:33.930537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.090 [2024-07-15 21:05:33.930552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.090 [2024-07-15 21:05:33.930557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.090 [2024-07-15 21:05:33.930562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.090 [2024-07-15 21:05:33.930572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-07-15 21:05:33.940450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.090 [2024-07-15 21:05:33.940522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.090 [2024-07-15 21:05:33.940534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.090 [2024-07-15 21:05:33.940539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.090 [2024-07-15 21:05:33.940543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.090 [2024-07-15 21:05:33.940554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-07-15 21:05:33.950554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.090 [2024-07-15 21:05:33.950632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.090 [2024-07-15 21:05:33.950644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.090 [2024-07-15 21:05:33.950649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.090 [2024-07-15 21:05:33.950653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.090 [2024-07-15 21:05:33.950664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.090 qpair failed and we were unable to recover it. 
00:29:30.090 [2024-07-15 21:05:33.960557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.090 [2024-07-15 21:05:33.960623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.090 [2024-07-15 21:05:33.960635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.090 [2024-07-15 21:05:33.960640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.090 [2024-07-15 21:05:33.960645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.090 [2024-07-15 21:05:33.960655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-07-15 21:05:33.970591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.090 [2024-07-15 21:05:33.970700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.090 [2024-07-15 21:05:33.970712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.090 [2024-07-15 21:05:33.970718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.090 [2024-07-15 21:05:33.970725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.090 [2024-07-15 21:05:33.970736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.353 [2024-07-15 21:05:33.980451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.353 [2024-07-15 21:05:33.980521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.353 [2024-07-15 21:05:33.980533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.353 [2024-07-15 21:05:33.980538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.353 [2024-07-15 21:05:33.980543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.353 [2024-07-15 21:05:33.980554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.353 qpair failed and we were unable to recover it. 
00:29:30.353 [2024-07-15 21:05:33.990720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.353 [2024-07-15 21:05:33.990803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.353 [2024-07-15 21:05:33.990815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.353 [2024-07-15 21:05:33.990820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.353 [2024-07-15 21:05:33.990824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.353 [2024-07-15 21:05:33.990835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.353 qpair failed and we were unable to recover it. 00:29:30.353 [2024-07-15 21:05:34.000659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.353 [2024-07-15 21:05:34.000731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.353 [2024-07-15 21:05:34.000750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.353 [2024-07-15 21:05:34.000756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.353 [2024-07-15 21:05:34.000761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.353 [2024-07-15 21:05:34.000775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.353 qpair failed and we were unable to recover it. 00:29:30.353 [2024-07-15 21:05:34.010729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.353 [2024-07-15 21:05:34.010852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.353 [2024-07-15 21:05:34.010871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.353 [2024-07-15 21:05:34.010878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.353 [2024-07-15 21:05:34.010882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.353 [2024-07-15 21:05:34.010896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.353 qpair failed and we were unable to recover it. 
00:29:30.353 [2024-07-15 21:05:34.020743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.353 [2024-07-15 21:05:34.020817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.353 [2024-07-15 21:05:34.020830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.353 [2024-07-15 21:05:34.020835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.353 [2024-07-15 21:05:34.020839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.353 [2024-07-15 21:05:34.020851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.353 qpair failed and we were unable to recover it. 00:29:30.353 [2024-07-15 21:05:34.030760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.353 [2024-07-15 21:05:34.030826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.353 [2024-07-15 21:05:34.030839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.353 [2024-07-15 21:05:34.030844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.353 [2024-07-15 21:05:34.030848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.353 [2024-07-15 21:05:34.030860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.353 qpair failed and we were unable to recover it. 00:29:30.353 [2024-07-15 21:05:34.040781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.353 [2024-07-15 21:05:34.040848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.353 [2024-07-15 21:05:34.040861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.353 [2024-07-15 21:05:34.040866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.353 [2024-07-15 21:05:34.040870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.353 [2024-07-15 21:05:34.040882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.353 qpair failed and we were unable to recover it. 
00:29:30.353 [2024-07-15 21:05:34.050787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.353 [2024-07-15 21:05:34.050855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.353 [2024-07-15 21:05:34.050868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.353 [2024-07-15 21:05:34.050873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.353 [2024-07-15 21:05:34.050877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.353 [2024-07-15 21:05:34.050888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.353 qpair failed and we were unable to recover it. 00:29:30.353 [2024-07-15 21:05:34.060745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.353 [2024-07-15 21:05:34.060812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.353 [2024-07-15 21:05:34.060824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.353 [2024-07-15 21:05:34.060829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.353 [2024-07-15 21:05:34.060840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.353 [2024-07-15 21:05:34.060851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.353 qpair failed and we were unable to recover it. 00:29:30.353 [2024-07-15 21:05:34.070734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.353 [2024-07-15 21:05:34.070829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.353 [2024-07-15 21:05:34.070841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.353 [2024-07-15 21:05:34.070846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.353 [2024-07-15 21:05:34.070850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.353 [2024-07-15 21:05:34.070861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.353 qpair failed and we were unable to recover it. 
00:29:30.353 [2024-07-15 21:05:34.080884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.353 [2024-07-15 21:05:34.080952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.353 [2024-07-15 21:05:34.080965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.354 [2024-07-15 21:05:34.080970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.354 [2024-07-15 21:05:34.080974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.354 [2024-07-15 21:05:34.080985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.354 qpair failed and we were unable to recover it. 00:29:30.354 [2024-07-15 21:05:34.090967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.354 [2024-07-15 21:05:34.091034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.354 [2024-07-15 21:05:34.091046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.354 [2024-07-15 21:05:34.091052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.354 [2024-07-15 21:05:34.091056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.354 [2024-07-15 21:05:34.091067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.354 qpair failed and we were unable to recover it. 00:29:30.354 [2024-07-15 21:05:34.100877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.354 [2024-07-15 21:05:34.100981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.354 [2024-07-15 21:05:34.100993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.354 [2024-07-15 21:05:34.100999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.354 [2024-07-15 21:05:34.101003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.354 [2024-07-15 21:05:34.101014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.354 qpair failed and we were unable to recover it. 
00:29:30.354 [2024-07-15 21:05:34.110972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.354 [2024-07-15 21:05:34.111036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.354 [2024-07-15 21:05:34.111048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.354 [2024-07-15 21:05:34.111053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.354 [2024-07-15 21:05:34.111058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.354 [2024-07-15 21:05:34.111068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.354 qpair failed and we were unable to recover it. 00:29:30.354 [2024-07-15 21:05:34.120972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.354 [2024-07-15 21:05:34.121041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.354 [2024-07-15 21:05:34.121053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.354 [2024-07-15 21:05:34.121058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.354 [2024-07-15 21:05:34.121062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.354 [2024-07-15 21:05:34.121073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.354 qpair failed and we were unable to recover it. 00:29:30.354 [2024-07-15 21:05:34.131027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.354 [2024-07-15 21:05:34.131152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.354 [2024-07-15 21:05:34.131165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.354 [2024-07-15 21:05:34.131170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.354 [2024-07-15 21:05:34.131174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.354 [2024-07-15 21:05:34.131186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.354 qpair failed and we were unable to recover it. 
00:29:30.354 [2024-07-15 21:05:34.140979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.354 [2024-07-15 21:05:34.141056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.354 [2024-07-15 21:05:34.141068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.354 [2024-07-15 21:05:34.141073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.354 [2024-07-15 21:05:34.141078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.354 [2024-07-15 21:05:34.141088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.354 qpair failed and we were unable to recover it. 00:29:30.354 [2024-07-15 21:05:34.151056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.354 [2024-07-15 21:05:34.151148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.354 [2024-07-15 21:05:34.151160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.354 [2024-07-15 21:05:34.151169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.354 [2024-07-15 21:05:34.151174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.354 [2024-07-15 21:05:34.151185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.354 qpair failed and we were unable to recover it. 00:29:30.354 [2024-07-15 21:05:34.161093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.354 [2024-07-15 21:05:34.161162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.354 [2024-07-15 21:05:34.161175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.354 [2024-07-15 21:05:34.161181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.354 [2024-07-15 21:05:34.161185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.354 [2024-07-15 21:05:34.161196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.354 qpair failed and we were unable to recover it. 
00:29:30.354 [2024-07-15 21:05:34.171135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.354 [2024-07-15 21:05:34.171203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.354 [2024-07-15 21:05:34.171216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.354 [2024-07-15 21:05:34.171221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.354 [2024-07-15 21:05:34.171225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.354 [2024-07-15 21:05:34.171236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.354 qpair failed and we were unable to recover it. 00:29:30.354 [2024-07-15 21:05:34.181111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.354 [2024-07-15 21:05:34.181186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.354 [2024-07-15 21:05:34.181198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.354 [2024-07-15 21:05:34.181203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.354 [2024-07-15 21:05:34.181208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.354 [2024-07-15 21:05:34.181219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.354 qpair failed and we were unable to recover it. 00:29:30.354 [2024-07-15 21:05:34.191229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.354 [2024-07-15 21:05:34.191342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.354 [2024-07-15 21:05:34.191355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.354 [2024-07-15 21:05:34.191361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.354 [2024-07-15 21:05:34.191365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.354 [2024-07-15 21:05:34.191376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.354 qpair failed and we were unable to recover it. 
00:29:30.354 [2024-07-15 21:05:34.201215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.354 [2024-07-15 21:05:34.201280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.354 [2024-07-15 21:05:34.201292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.354 [2024-07-15 21:05:34.201297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.354 [2024-07-15 21:05:34.201301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.354 [2024-07-15 21:05:34.201312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.354 qpair failed and we were unable to recover it. 00:29:30.354 [2024-07-15 21:05:34.211227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.354 [2024-07-15 21:05:34.211298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.354 [2024-07-15 21:05:34.211311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.354 [2024-07-15 21:05:34.211316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.354 [2024-07-15 21:05:34.211320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.354 [2024-07-15 21:05:34.211331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.354 qpair failed and we were unable to recover it. 00:29:30.354 [2024-07-15 21:05:34.221131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.354 [2024-07-15 21:05:34.221241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.354 [2024-07-15 21:05:34.221254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.355 [2024-07-15 21:05:34.221259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.355 [2024-07-15 21:05:34.221263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.355 [2024-07-15 21:05:34.221274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.355 qpair failed and we were unable to recover it. 
00:29:30.355 [2024-07-15 21:05:34.231271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.355 [2024-07-15 21:05:34.231330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.355 [2024-07-15 21:05:34.231343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.355 [2024-07-15 21:05:34.231348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.355 [2024-07-15 21:05:34.231352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.355 [2024-07-15 21:05:34.231363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.355 qpair failed and we were unable to recover it. 00:29:30.355 [2024-07-15 21:05:34.241362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.355 [2024-07-15 21:05:34.241460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.355 [2024-07-15 21:05:34.241475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.355 [2024-07-15 21:05:34.241480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.355 [2024-07-15 21:05:34.241484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.355 [2024-07-15 21:05:34.241495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.355 qpair failed and we were unable to recover it. 00:29:30.617 [2024-07-15 21:05:34.251347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.617 [2024-07-15 21:05:34.251418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.617 [2024-07-15 21:05:34.251430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.617 [2024-07-15 21:05:34.251435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.617 [2024-07-15 21:05:34.251439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.617 [2024-07-15 21:05:34.251451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.618 qpair failed and we were unable to recover it. 
00:29:30.618 [2024-07-15 21:05:34.261479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.618 [2024-07-15 21:05:34.261635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.618 [2024-07-15 21:05:34.261647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.618 [2024-07-15 21:05:34.261652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.618 [2024-07-15 21:05:34.261656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.618 [2024-07-15 21:05:34.261667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.618 qpair failed and we were unable to recover it. 00:29:30.618 [2024-07-15 21:05:34.271374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.618 [2024-07-15 21:05:34.271433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.618 [2024-07-15 21:05:34.271445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.618 [2024-07-15 21:05:34.271450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.618 [2024-07-15 21:05:34.271454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.618 [2024-07-15 21:05:34.271465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.618 qpair failed and we were unable to recover it. 00:29:30.618 [2024-07-15 21:05:34.281427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.618 [2024-07-15 21:05:34.281493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.618 [2024-07-15 21:05:34.281505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.618 [2024-07-15 21:05:34.281510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.618 [2024-07-15 21:05:34.281514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.618 [2024-07-15 21:05:34.281528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.618 qpair failed and we were unable to recover it. 
00:29:30.618 [2024-07-15 21:05:34.291467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.618 [2024-07-15 21:05:34.291536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.618 [2024-07-15 21:05:34.291548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.618 [2024-07-15 21:05:34.291553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.618 [2024-07-15 21:05:34.291557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.618 [2024-07-15 21:05:34.291568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.618 qpair failed and we were unable to recover it. 00:29:30.618 [2024-07-15 21:05:34.301447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.618 [2024-07-15 21:05:34.301539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.618 [2024-07-15 21:05:34.301551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.618 [2024-07-15 21:05:34.301556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.618 [2024-07-15 21:05:34.301560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.618 [2024-07-15 21:05:34.301571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.618 qpair failed and we were unable to recover it. 00:29:30.618 [2024-07-15 21:05:34.311462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.618 [2024-07-15 21:05:34.311529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.618 [2024-07-15 21:05:34.311541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.618 [2024-07-15 21:05:34.311546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.618 [2024-07-15 21:05:34.311550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.618 [2024-07-15 21:05:34.311561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.618 qpair failed and we were unable to recover it. 
00:29:30.618 [2024-07-15 21:05:34.321420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.618 [2024-07-15 21:05:34.321496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.618 [2024-07-15 21:05:34.321508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.618 [2024-07-15 21:05:34.321513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.618 [2024-07-15 21:05:34.321517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.618 [2024-07-15 21:05:34.321528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.618 qpair failed and we were unable to recover it. 00:29:30.618 [2024-07-15 21:05:34.331556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.618 [2024-07-15 21:05:34.331642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.618 [2024-07-15 21:05:34.331658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.618 [2024-07-15 21:05:34.331665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.618 [2024-07-15 21:05:34.331669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.618 [2024-07-15 21:05:34.331681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.618 qpair failed and we were unable to recover it. 00:29:30.618 [2024-07-15 21:05:34.341539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.618 [2024-07-15 21:05:34.341611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.618 [2024-07-15 21:05:34.341623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.618 [2024-07-15 21:05:34.341628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.618 [2024-07-15 21:05:34.341632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.618 [2024-07-15 21:05:34.341644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.618 qpair failed and we were unable to recover it. 
00:29:30.618 [2024-07-15 21:05:34.351574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.618 [2024-07-15 21:05:34.351652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.618 [2024-07-15 21:05:34.351664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.618 [2024-07-15 21:05:34.351669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.618 [2024-07-15 21:05:34.351673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.618 [2024-07-15 21:05:34.351684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.618 qpair failed and we were unable to recover it. 00:29:30.618 [2024-07-15 21:05:34.361587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.618 [2024-07-15 21:05:34.361651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.618 [2024-07-15 21:05:34.361663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.618 [2024-07-15 21:05:34.361668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.618 [2024-07-15 21:05:34.361672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.618 [2024-07-15 21:05:34.361683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.618 qpair failed and we were unable to recover it. 00:29:30.618 [2024-07-15 21:05:34.371664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.618 [2024-07-15 21:05:34.371782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.618 [2024-07-15 21:05:34.371795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.618 [2024-07-15 21:05:34.371800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.618 [2024-07-15 21:05:34.371807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.618 [2024-07-15 21:05:34.371818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.618 qpair failed and we were unable to recover it. 
00:29:30.618 [2024-07-15 21:05:34.381618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.619 [2024-07-15 21:05:34.381725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.619 [2024-07-15 21:05:34.381737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.619 [2024-07-15 21:05:34.381742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.619 [2024-07-15 21:05:34.381746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.619 [2024-07-15 21:05:34.381757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.619 qpair failed and we were unable to recover it. 00:29:30.619 [2024-07-15 21:05:34.391698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.619 [2024-07-15 21:05:34.391767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.619 [2024-07-15 21:05:34.391786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.619 [2024-07-15 21:05:34.391792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.619 [2024-07-15 21:05:34.391797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.619 [2024-07-15 21:05:34.391811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.619 qpair failed and we were unable to recover it. 00:29:30.619 [2024-07-15 21:05:34.401711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.619 [2024-07-15 21:05:34.401804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.619 [2024-07-15 21:05:34.401817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.619 [2024-07-15 21:05:34.401823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.619 [2024-07-15 21:05:34.401828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.619 [2024-07-15 21:05:34.401839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.619 qpair failed and we were unable to recover it. 
00:29:30.619 [2024-07-15 21:05:34.411784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.619 [2024-07-15 21:05:34.411859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.619 [2024-07-15 21:05:34.411878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.619 [2024-07-15 21:05:34.411884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.619 [2024-07-15 21:05:34.411889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.619 [2024-07-15 21:05:34.411903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.619 qpair failed and we were unable to recover it. 00:29:30.619 [2024-07-15 21:05:34.421655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.619 [2024-07-15 21:05:34.421730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.619 [2024-07-15 21:05:34.421749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.619 [2024-07-15 21:05:34.421755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.619 [2024-07-15 21:05:34.421760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.619 [2024-07-15 21:05:34.421774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.619 qpair failed and we were unable to recover it. 00:29:30.619 [2024-07-15 21:05:34.431857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.619 [2024-07-15 21:05:34.431926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.619 [2024-07-15 21:05:34.431945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.619 [2024-07-15 21:05:34.431951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.619 [2024-07-15 21:05:34.431956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.619 [2024-07-15 21:05:34.431970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.619 qpair failed and we were unable to recover it. 
00:29:30.619 [2024-07-15 21:05:34.441806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.619 [2024-07-15 21:05:34.441871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.619 [2024-07-15 21:05:34.441890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.619 [2024-07-15 21:05:34.441896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.619 [2024-07-15 21:05:34.441901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.619 [2024-07-15 21:05:34.441915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.619 qpair failed and we were unable to recover it. 00:29:30.619 [2024-07-15 21:05:34.451880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.619 [2024-07-15 21:05:34.451957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.619 [2024-07-15 21:05:34.451975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.619 [2024-07-15 21:05:34.451982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.619 [2024-07-15 21:05:34.451986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.619 [2024-07-15 21:05:34.452001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.619 qpair failed and we were unable to recover it. 00:29:30.619 [2024-07-15 21:05:34.461862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.619 [2024-07-15 21:05:34.461930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.619 [2024-07-15 21:05:34.461943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.619 [2024-07-15 21:05:34.461948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.619 [2024-07-15 21:05:34.461956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.619 [2024-07-15 21:05:34.461968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.619 qpair failed and we were unable to recover it. 
00:29:30.619 [2024-07-15 21:05:34.471878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.619 [2024-07-15 21:05:34.471945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.619 [2024-07-15 21:05:34.471958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.619 [2024-07-15 21:05:34.471963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.619 [2024-07-15 21:05:34.471967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.619 [2024-07-15 21:05:34.471978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.619 qpair failed and we were unable to recover it. 00:29:30.619 [2024-07-15 21:05:34.481933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.619 [2024-07-15 21:05:34.481998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.619 [2024-07-15 21:05:34.482010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.619 [2024-07-15 21:05:34.482015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.619 [2024-07-15 21:05:34.482020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.619 [2024-07-15 21:05:34.482031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.619 qpair failed and we were unable to recover it. 00:29:30.619 [2024-07-15 21:05:34.492004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.619 [2024-07-15 21:05:34.492073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.619 [2024-07-15 21:05:34.492085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.619 [2024-07-15 21:05:34.492090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.619 [2024-07-15 21:05:34.492094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.619 [2024-07-15 21:05:34.492105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.619 qpair failed and we were unable to recover it. 
00:29:30.620 [2024-07-15 21:05:34.501978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.620 [2024-07-15 21:05:34.502047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.620 [2024-07-15 21:05:34.502059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.620 [2024-07-15 21:05:34.502065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.620 [2024-07-15 21:05:34.502069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.620 [2024-07-15 21:05:34.502080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.620 qpair failed and we were unable to recover it. 00:29:30.883 [2024-07-15 21:05:34.511995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.883 [2024-07-15 21:05:34.512056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.883 [2024-07-15 21:05:34.512068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.883 [2024-07-15 21:05:34.512074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.883 [2024-07-15 21:05:34.512078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.883 [2024-07-15 21:05:34.512089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.883 qpair failed and we were unable to recover it. 00:29:30.883 [2024-07-15 21:05:34.521916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.883 [2024-07-15 21:05:34.522050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.883 [2024-07-15 21:05:34.522063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.883 [2024-07-15 21:05:34.522069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.883 [2024-07-15 21:05:34.522073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.883 [2024-07-15 21:05:34.522084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.883 qpair failed and we were unable to recover it. 
00:29:30.883 [2024-07-15 21:05:34.532145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.883 [2024-07-15 21:05:34.532221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.883 [2024-07-15 21:05:34.532233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.883 [2024-07-15 21:05:34.532239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.883 [2024-07-15 21:05:34.532243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.883 [2024-07-15 21:05:34.532254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.883 qpair failed and we were unable to recover it. 00:29:30.883 [2024-07-15 21:05:34.542148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.883 [2024-07-15 21:05:34.542235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.883 [2024-07-15 21:05:34.542247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.883 [2024-07-15 21:05:34.542252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.883 [2024-07-15 21:05:34.542256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.883 [2024-07-15 21:05:34.542267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.883 qpair failed and we were unable to recover it. 00:29:30.883 [2024-07-15 21:05:34.552216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.883 [2024-07-15 21:05:34.552387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.883 [2024-07-15 21:05:34.552399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.883 [2024-07-15 21:05:34.552407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.883 [2024-07-15 21:05:34.552412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.883 [2024-07-15 21:05:34.552423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.883 qpair failed and we were unable to recover it. 
00:29:30.883 [2024-07-15 21:05:34.562152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.883 [2024-07-15 21:05:34.562221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.883 [2024-07-15 21:05:34.562233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.883 [2024-07-15 21:05:34.562238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.883 [2024-07-15 21:05:34.562243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.883 [2024-07-15 21:05:34.562253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.883 qpair failed and we were unable to recover it. 00:29:30.883 [2024-07-15 21:05:34.572251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.883 [2024-07-15 21:05:34.572324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.883 [2024-07-15 21:05:34.572336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.883 [2024-07-15 21:05:34.572341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.883 [2024-07-15 21:05:34.572345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.883 [2024-07-15 21:05:34.572356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.883 qpair failed and we were unable to recover it. 00:29:30.883 [2024-07-15 21:05:34.582175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.883 [2024-07-15 21:05:34.582287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.883 [2024-07-15 21:05:34.582299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.883 [2024-07-15 21:05:34.582304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.883 [2024-07-15 21:05:34.582308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.883 [2024-07-15 21:05:34.582319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.883 qpair failed and we were unable to recover it. 
00:29:30.883 [2024-07-15 21:05:34.592211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.883 [2024-07-15 21:05:34.592279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.883 [2024-07-15 21:05:34.592292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.883 [2024-07-15 21:05:34.592297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.883 [2024-07-15 21:05:34.592301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.883 [2024-07-15 21:05:34.592313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.883 qpair failed and we were unable to recover it. 00:29:30.883 [2024-07-15 21:05:34.602286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.883 [2024-07-15 21:05:34.602357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.883 [2024-07-15 21:05:34.602369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.883 [2024-07-15 21:05:34.602374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.883 [2024-07-15 21:05:34.602378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.883 [2024-07-15 21:05:34.602390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.883 qpair failed and we were unable to recover it. 00:29:30.883 [2024-07-15 21:05:34.612297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.883 [2024-07-15 21:05:34.612365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.883 [2024-07-15 21:05:34.612378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.884 [2024-07-15 21:05:34.612383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.884 [2024-07-15 21:05:34.612387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.884 [2024-07-15 21:05:34.612398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.884 qpair failed and we were unable to recover it. 
00:29:30.884 [2024-07-15 21:05:34.622295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.884 [2024-07-15 21:05:34.622362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.884 [2024-07-15 21:05:34.622376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.884 [2024-07-15 21:05:34.622381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.884 [2024-07-15 21:05:34.622386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.884 [2024-07-15 21:05:34.622398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.884 qpair failed and we were unable to recover it. 00:29:30.884 [2024-07-15 21:05:34.632380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.884 [2024-07-15 21:05:34.632458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.884 [2024-07-15 21:05:34.632471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.884 [2024-07-15 21:05:34.632476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.884 [2024-07-15 21:05:34.632480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.884 [2024-07-15 21:05:34.632491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.884 qpair failed and we were unable to recover it. 00:29:30.884 [2024-07-15 21:05:34.642347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.884 [2024-07-15 21:05:34.642415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.884 [2024-07-15 21:05:34.642430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.884 [2024-07-15 21:05:34.642435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.884 [2024-07-15 21:05:34.642440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.884 [2024-07-15 21:05:34.642450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.884 qpair failed and we were unable to recover it. 
00:29:30.884 [2024-07-15 21:05:34.652430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.884 [2024-07-15 21:05:34.652500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.884 [2024-07-15 21:05:34.652512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.884 [2024-07-15 21:05:34.652517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.884 [2024-07-15 21:05:34.652521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.884 [2024-07-15 21:05:34.652532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.884 qpair failed and we were unable to recover it. 00:29:30.884 [2024-07-15 21:05:34.662425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.884 [2024-07-15 21:05:34.662516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.884 [2024-07-15 21:05:34.662529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.884 [2024-07-15 21:05:34.662535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.884 [2024-07-15 21:05:34.662539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.884 [2024-07-15 21:05:34.662550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.884 qpair failed and we were unable to recover it. 00:29:30.884 [2024-07-15 21:05:34.672442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.884 [2024-07-15 21:05:34.672506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.884 [2024-07-15 21:05:34.672519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.884 [2024-07-15 21:05:34.672524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.884 [2024-07-15 21:05:34.672529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.884 [2024-07-15 21:05:34.672540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.884 qpair failed and we were unable to recover it. 
00:29:30.884 [2024-07-15 21:05:34.682350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.884 [2024-07-15 21:05:34.682415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.884 [2024-07-15 21:05:34.682428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.884 [2024-07-15 21:05:34.682433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.884 [2024-07-15 21:05:34.682437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.884 [2024-07-15 21:05:34.682451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.884 qpair failed and we were unable to recover it. 00:29:30.884 [2024-07-15 21:05:34.692617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.884 [2024-07-15 21:05:34.692721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.884 [2024-07-15 21:05:34.692734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.884 [2024-07-15 21:05:34.692739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.884 [2024-07-15 21:05:34.692743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.884 [2024-07-15 21:05:34.692754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.884 qpair failed and we were unable to recover it. 00:29:30.884 [2024-07-15 21:05:34.702541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.884 [2024-07-15 21:05:34.702607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.884 [2024-07-15 21:05:34.702619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.884 [2024-07-15 21:05:34.702624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.884 [2024-07-15 21:05:34.702629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.884 [2024-07-15 21:05:34.702640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.884 qpair failed and we were unable to recover it. 
00:29:30.884 [2024-07-15 21:05:34.712549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.884 [2024-07-15 21:05:34.712624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.884 [2024-07-15 21:05:34.712636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.884 [2024-07-15 21:05:34.712641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.884 [2024-07-15 21:05:34.712646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.884 [2024-07-15 21:05:34.712657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.884 qpair failed and we were unable to recover it. 00:29:30.884 [2024-07-15 21:05:34.722573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.884 [2024-07-15 21:05:34.722633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.884 [2024-07-15 21:05:34.722646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.884 [2024-07-15 21:05:34.722651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.884 [2024-07-15 21:05:34.722656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.884 [2024-07-15 21:05:34.722666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.884 qpair failed and we were unable to recover it. 00:29:30.884 [2024-07-15 21:05:34.732721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.884 [2024-07-15 21:05:34.732822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.884 [2024-07-15 21:05:34.732839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.884 [2024-07-15 21:05:34.732844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.884 [2024-07-15 21:05:34.732849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.884 [2024-07-15 21:05:34.732860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.884 qpair failed and we were unable to recover it. 
00:29:30.884 [2024-07-15 21:05:34.742625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.884 [2024-07-15 21:05:34.742690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.884 [2024-07-15 21:05:34.742703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.884 [2024-07-15 21:05:34.742708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.884 [2024-07-15 21:05:34.742712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.884 [2024-07-15 21:05:34.742723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.884 qpair failed and we were unable to recover it. 00:29:30.885 [2024-07-15 21:05:34.752667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.885 [2024-07-15 21:05:34.752729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.885 [2024-07-15 21:05:34.752741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.885 [2024-07-15 21:05:34.752746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.885 [2024-07-15 21:05:34.752751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.885 [2024-07-15 21:05:34.752762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.885 qpair failed and we were unable to recover it. 00:29:30.885 [2024-07-15 21:05:34.762668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.885 [2024-07-15 21:05:34.762743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.885 [2024-07-15 21:05:34.762755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.885 [2024-07-15 21:05:34.762760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.885 [2024-07-15 21:05:34.762765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.885 [2024-07-15 21:05:34.762775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.885 qpair failed and we were unable to recover it. 
00:29:30.885 [2024-07-15 21:05:34.772675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.885 [2024-07-15 21:05:34.772764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.885 [2024-07-15 21:05:34.772776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.885 [2024-07-15 21:05:34.772781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.885 [2024-07-15 21:05:34.772785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:30.885 [2024-07-15 21:05:34.772802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.885 qpair failed and we were unable to recover it. 00:29:31.147 [2024-07-15 21:05:34.782738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.147 [2024-07-15 21:05:34.782812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.147 [2024-07-15 21:05:34.782825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.147 [2024-07-15 21:05:34.782830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.147 [2024-07-15 21:05:34.782834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.147 [2024-07-15 21:05:34.782845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.147 qpair failed and we were unable to recover it. 00:29:31.147 [2024-07-15 21:05:34.792800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.147 [2024-07-15 21:05:34.792874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.147 [2024-07-15 21:05:34.792887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.147 [2024-07-15 21:05:34.792894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.147 [2024-07-15 21:05:34.792898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.147 [2024-07-15 21:05:34.792910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.147 qpair failed and we were unable to recover it. 
00:29:31.147 [2024-07-15 21:05:34.802804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.147 [2024-07-15 21:05:34.802870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.147 [2024-07-15 21:05:34.802883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.147 [2024-07-15 21:05:34.802888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.147 [2024-07-15 21:05:34.802893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.147 [2024-07-15 21:05:34.802904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.147 qpair failed and we were unable to recover it. 00:29:31.147 [2024-07-15 21:05:34.812893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.147 [2024-07-15 21:05:34.812964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.147 [2024-07-15 21:05:34.812976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.147 [2024-07-15 21:05:34.812981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.147 [2024-07-15 21:05:34.812985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.147 [2024-07-15 21:05:34.812996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.147 qpair failed and we were unable to recover it. 00:29:31.147 [2024-07-15 21:05:34.822860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.147 [2024-07-15 21:05:34.822933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.147 [2024-07-15 21:05:34.822946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.147 [2024-07-15 21:05:34.822951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.147 [2024-07-15 21:05:34.822955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.147 [2024-07-15 21:05:34.822966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.147 qpair failed and we were unable to recover it. 
00:29:31.147 [2024-07-15 21:05:34.832869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.147 [2024-07-15 21:05:34.832932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.148 [2024-07-15 21:05:34.832944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.148 [2024-07-15 21:05:34.832949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.148 [2024-07-15 21:05:34.832953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.148 [2024-07-15 21:05:34.832964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.148 qpair failed and we were unable to recover it. 00:29:31.148 [2024-07-15 21:05:34.842899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.148 [2024-07-15 21:05:34.842978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.148 [2024-07-15 21:05:34.842991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.148 [2024-07-15 21:05:34.842996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.148 [2024-07-15 21:05:34.843000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.148 [2024-07-15 21:05:34.843011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.148 qpair failed and we were unable to recover it. 00:29:31.148 [2024-07-15 21:05:34.852978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.148 [2024-07-15 21:05:34.853050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.148 [2024-07-15 21:05:34.853062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.148 [2024-07-15 21:05:34.853067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.148 [2024-07-15 21:05:34.853071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.148 [2024-07-15 21:05:34.853082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.148 qpair failed and we were unable to recover it. 
00:29:31.148 [2024-07-15 21:05:34.862865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.148 [2024-07-15 21:05:34.862933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.148 [2024-07-15 21:05:34.862946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.148 [2024-07-15 21:05:34.862951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.148 [2024-07-15 21:05:34.862958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.148 [2024-07-15 21:05:34.862970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.148 qpair failed and we were unable to recover it. 00:29:31.148 [2024-07-15 21:05:34.872980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.148 [2024-07-15 21:05:34.873048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.148 [2024-07-15 21:05:34.873060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.148 [2024-07-15 21:05:34.873065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.148 [2024-07-15 21:05:34.873070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.148 [2024-07-15 21:05:34.873081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.148 qpair failed and we were unable to recover it. 00:29:31.148 [2024-07-15 21:05:34.882893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.148 [2024-07-15 21:05:34.882960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.148 [2024-07-15 21:05:34.882973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.148 [2024-07-15 21:05:34.882978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.148 [2024-07-15 21:05:34.882982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.148 [2024-07-15 21:05:34.882993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.148 qpair failed and we were unable to recover it. 
00:29:31.148 [2024-07-15 21:05:34.893091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.148 [2024-07-15 21:05:34.893163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.148 [2024-07-15 21:05:34.893176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.148 [2024-07-15 21:05:34.893181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.148 [2024-07-15 21:05:34.893185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.148 [2024-07-15 21:05:34.893196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.148 qpair failed and we were unable to recover it. 00:29:31.148 [2024-07-15 21:05:34.903060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.148 [2024-07-15 21:05:34.903162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.148 [2024-07-15 21:05:34.903175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.148 [2024-07-15 21:05:34.903180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.148 [2024-07-15 21:05:34.903184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.148 [2024-07-15 21:05:34.903196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.148 qpair failed and we were unable to recover it. 00:29:31.148 [2024-07-15 21:05:34.913149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.148 [2024-07-15 21:05:34.913218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.148 [2024-07-15 21:05:34.913230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.148 [2024-07-15 21:05:34.913235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.148 [2024-07-15 21:05:34.913240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.148 [2024-07-15 21:05:34.913251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.148 qpair failed and we were unable to recover it. 
00:29:31.148 [2024-07-15 21:05:34.923118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.148 [2024-07-15 21:05:34.923187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.148 [2024-07-15 21:05:34.923199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.148 [2024-07-15 21:05:34.923204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.148 [2024-07-15 21:05:34.923209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.148 [2024-07-15 21:05:34.923220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.148 qpair failed and we were unable to recover it. 00:29:31.148 [2024-07-15 21:05:34.933197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.148 [2024-07-15 21:05:34.933268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.148 [2024-07-15 21:05:34.933279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.148 [2024-07-15 21:05:34.933285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.148 [2024-07-15 21:05:34.933289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.148 [2024-07-15 21:05:34.933300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.148 qpair failed and we were unable to recover it. 00:29:31.148 [2024-07-15 21:05:34.943250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.148 [2024-07-15 21:05:34.943339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.148 [2024-07-15 21:05:34.943351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.148 [2024-07-15 21:05:34.943356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.148 [2024-07-15 21:05:34.943360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.148 [2024-07-15 21:05:34.943371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.148 qpair failed and we were unable to recover it. 
00:29:31.148 [2024-07-15 21:05:34.953216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.148 [2024-07-15 21:05:34.953282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.148 [2024-07-15 21:05:34.953294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.148 [2024-07-15 21:05:34.953302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.149 [2024-07-15 21:05:34.953306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.149 [2024-07-15 21:05:34.953317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.149 qpair failed and we were unable to recover it. 00:29:31.149 [2024-07-15 21:05:34.963119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.149 [2024-07-15 21:05:34.963189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.149 [2024-07-15 21:05:34.963202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.149 [2024-07-15 21:05:34.963207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.149 [2024-07-15 21:05:34.963212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.149 [2024-07-15 21:05:34.963223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.149 qpair failed and we were unable to recover it. 00:29:31.149 [2024-07-15 21:05:34.973307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.149 [2024-07-15 21:05:34.973377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.149 [2024-07-15 21:05:34.973389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.149 [2024-07-15 21:05:34.973394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.149 [2024-07-15 21:05:34.973399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.149 [2024-07-15 21:05:34.973410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.149 qpair failed and we were unable to recover it. 
00:29:31.149 [2024-07-15 21:05:34.983191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.149 [2024-07-15 21:05:34.983403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.149 [2024-07-15 21:05:34.983416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.149 [2024-07-15 21:05:34.983421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.149 [2024-07-15 21:05:34.983426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.149 [2024-07-15 21:05:34.983436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.149 qpair failed and we were unable to recover it. 00:29:31.149 [2024-07-15 21:05:34.993320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.149 [2024-07-15 21:05:34.993387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.149 [2024-07-15 21:05:34.993399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.149 [2024-07-15 21:05:34.993404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.149 [2024-07-15 21:05:34.993409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.149 [2024-07-15 21:05:34.993419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.149 qpair failed and we were unable to recover it. 00:29:31.149 [2024-07-15 21:05:35.003255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.149 [2024-07-15 21:05:35.003360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.149 [2024-07-15 21:05:35.003374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.149 [2024-07-15 21:05:35.003381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.149 [2024-07-15 21:05:35.003385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.149 [2024-07-15 21:05:35.003397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.149 qpair failed and we were unable to recover it. 
00:29:31.149 [2024-07-15 21:05:35.013415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.149 [2024-07-15 21:05:35.013511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.149 [2024-07-15 21:05:35.013524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.149 [2024-07-15 21:05:35.013529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.149 [2024-07-15 21:05:35.013534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.149 [2024-07-15 21:05:35.013545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.149 qpair failed and we were unable to recover it. 00:29:31.149 [2024-07-15 21:05:35.023408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.149 [2024-07-15 21:05:35.023479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.149 [2024-07-15 21:05:35.023492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.149 [2024-07-15 21:05:35.023497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.149 [2024-07-15 21:05:35.023501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.149 [2024-07-15 21:05:35.023511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.149 qpair failed and we were unable to recover it. 00:29:31.149 [2024-07-15 21:05:35.033529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.149 [2024-07-15 21:05:35.033598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.149 [2024-07-15 21:05:35.033610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.149 [2024-07-15 21:05:35.033615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.149 [2024-07-15 21:05:35.033619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.149 [2024-07-15 21:05:35.033630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.149 qpair failed and we were unable to recover it. 
00:29:31.411 [2024-07-15 21:05:35.043460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.411 [2024-07-15 21:05:35.043522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.411 [2024-07-15 21:05:35.043534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.411 [2024-07-15 21:05:35.043543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.411 [2024-07-15 21:05:35.043547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.411 [2024-07-15 21:05:35.043558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.411 qpair failed and we were unable to recover it. 00:29:31.411 [2024-07-15 21:05:35.053521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.411 [2024-07-15 21:05:35.053590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.411 [2024-07-15 21:05:35.053602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.411 [2024-07-15 21:05:35.053608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.411 [2024-07-15 21:05:35.053612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.411 [2024-07-15 21:05:35.053623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.411 qpair failed and we were unable to recover it. 00:29:31.411 [2024-07-15 21:05:35.063538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.411 [2024-07-15 21:05:35.063645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.411 [2024-07-15 21:05:35.063658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.411 [2024-07-15 21:05:35.063664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.411 [2024-07-15 21:05:35.063668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.411 [2024-07-15 21:05:35.063680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.411 qpair failed and we were unable to recover it. 
00:29:31.411 [2024-07-15 21:05:35.073519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.411 [2024-07-15 21:05:35.073622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.411 [2024-07-15 21:05:35.073635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.411 [2024-07-15 21:05:35.073640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.411 [2024-07-15 21:05:35.073644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f045c000b90 00:29:31.411 [2024-07-15 21:05:35.073655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.411 qpair failed and we were unable to recover it. 00:29:31.411 [2024-07-15 21:05:35.083584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.411 [2024-07-15 21:05:35.083706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.411 [2024-07-15 21:05:35.083733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.411 [2024-07-15 21:05:35.083741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.411 [2024-07-15 21:05:35.083749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2397220 00:29:31.411 [2024-07-15 21:05:35.083768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.411 qpair failed and we were unable to recover it. 00:29:31.411 [2024-07-15 21:05:35.093628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.411 [2024-07-15 21:05:35.093721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.411 [2024-07-15 21:05:35.093746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.411 [2024-07-15 21:05:35.093755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.411 [2024-07-15 21:05:35.093762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2397220 00:29:31.411 [2024-07-15 21:05:35.093782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.411 qpair failed and we were unable to recover it. 
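Editor's note: the entries above all follow one pattern. The target rejects each attempt to add an I/O qpair because it no longer recognizes controller ID 0x1, so the host-side fabrics CONNECT on tqpair 0x7f045c000b90 completes with sct 1, sc 130 (0x82, which corresponds to the fabrics CONNECT "invalid parameters" status), and the host abandons that qpair. This is exactly the condition the target_disconnect test is provoking while the target-side controller is being torn down. For orientation only, a comparable connect attempt against the same listener could be made with the kernel initiator; the nvme-cli commands and the nvme-tcp module below are illustrative assumptions and are not part of this test, which drives the SPDK userspace host stack instead.

    # hedged sketch: probe the listener referenced in the log with nvme-cli (assumes nvme-tcp is available on the host)
    modprobe nvme-tcp
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        || echo "connect rejected, matching the CONNECT failures logged above"
    nvme list-subsys                                  # show whether the subsystem attached
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # clean up the probe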
00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Write completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Write completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Write completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Write completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Write completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Write completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Write completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Write completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Write completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Write completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 Read completed with error (sct=0, sc=8) 00:29:31.411 starting I/O failed 00:29:31.411 [2024-07-15 21:05:35.094663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.411 [2024-07-15 21:05:35.094711] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:31.411 A controller has encountered a failure and is being reset. 00:29:31.411 [2024-07-15 21:05:35.094750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a4f20 (9): Bad file descriptor 00:29:31.411 Controller properly reset. 
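Editor's note: the block above is the tail of the same disconnect cycle. Every outstanding read and write is completed with an error (sct=0, sc=8, which appears to be the generic "command aborted due to SQ deletion" status), the Keep Alive submission to nqn.2016-06.io.spdk:cnode1 then fails, the host declares the controller failed, flushing tqpair 0x23a4f20 returns "Bad file descriptor", and the controller is reported as properly reset; the re-initialization that follows re-attaches it on all four cores. For orientation only, a similar attach/reset cycle can be driven by hand when the host side is the bdev_nvme module of a running SPDK application; the rpc.py path and the controller name Nvme0 below are assumptions, not values taken from this log.

    # hedged sketch: manual attach and reset of the same target through SPDK's RPC interface (assumed bdev_nvme setup)
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py bdev_nvme_get_controllers         # confirm the controller is attached
    ./scripts/rpc.py bdev_nvme_reset_controller Nvme0  # force a reset comparable to the one logged above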
00:29:31.411 [2024-07-15 21:05:35.245297] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147b210 is same with the state(5) to be set 00:29:31.411 Initializing NVMe Controllers 00:29:31.411 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:31.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:31.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:31.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:31.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:31.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:31.411 Initialization complete. Launching workers. 00:29:31.411 Starting thread on core 1 00:29:31.411 Starting thread on core 2 00:29:31.411 Starting thread on core 3 00:29:31.411 Starting thread on core 0 00:29:31.411 21:05:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:31.411 00:29:31.411 real 0m11.561s 00:29:31.411 user 0m20.778s 00:29:31.411 sys 0m4.114s 00:29:31.411 21:05:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:31.411 21:05:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.411 ************************************ 00:29:31.411 END TEST nvmf_target_disconnect_tc2 00:29:31.411 ************************************ 00:29:31.411 21:05:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:31.412 21:05:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:31.412 21:05:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:31.412 21:05:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:31.412 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:31.412 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:31.671 rmmod nvme_tcp 00:29:31.671 rmmod nvme_fabrics 00:29:31.671 rmmod nvme_keyring 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1772910 ']' 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1772910 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1772910 ']' 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1772910 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:29:31.671 21:05:35 
nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1772910 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1772910' 00:29:31.671 killing process with pid 1772910 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1772910 00:29:31.671 21:05:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1772910 00:29:31.931 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:31.931 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:31.931 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:31.931 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:31.931 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:31.931 21:05:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.931 21:05:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:31.931 21:05:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.841 21:05:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:33.841 00:29:33.841 real 0m21.545s 00:29:33.841 user 0m49.397s 00:29:33.841 sys 0m9.820s 00:29:33.841 21:05:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:33.841 21:05:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:33.841 ************************************ 00:29:33.841 END TEST nvmf_target_disconnect 00:29:33.841 ************************************ 00:29:33.841 21:05:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:33.841 21:05:37 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:29:33.842 21:05:37 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:33.842 21:05:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:34.102 21:05:37 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:29:34.102 00:29:34.102 real 22m37.074s 00:29:34.102 user 47m13.901s 00:29:34.102 sys 7m11.671s 00:29:34.102 21:05:37 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:34.102 21:05:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:34.102 ************************************ 00:29:34.102 END TEST nvmf_tcp 00:29:34.102 ************************************ 00:29:34.102 21:05:37 -- common/autotest_common.sh@1142 -- # return 0 00:29:34.102 21:05:37 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:34.102 21:05:37 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:34.102 21:05:37 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:34.102 21:05:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:34.102 21:05:37 -- common/autotest_common.sh@10 
-- # set +x 00:29:34.102 ************************************ 00:29:34.102 START TEST spdkcli_nvmf_tcp 00:29:34.102 ************************************ 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:34.102 * Looking for test storage... 00:29:34.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1774742 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1774742 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1774742 ']' 00:29:34.102 21:05:37 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.103 21:05:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:34.103 21:05:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.103 21:05:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:34.103 21:05:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:34.103 21:05:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:34.103 [2024-07-15 21:05:37.988577] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:29:34.103 [2024-07-15 21:05:37.988667] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774742 ] 00:29:34.362 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.362 [2024-07-15 21:05:38.054403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:34.362 [2024-07-15 21:05:38.131239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.362 [2024-07-15 21:05:38.131384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.932 21:05:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:34.932 21:05:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:29:34.932 21:05:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:34.932 21:05:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:34.932 21:05:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:34.932 21:05:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:34.932 21:05:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:34.933 21:05:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:34.933 21:05:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:34.933 21:05:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:34.933 21:05:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:34.933 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:34.933 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:34.933 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:34.933 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:34.933 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:34.933 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:34.933 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 
00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:34.933 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:34.933 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:34.933 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:34.933 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:34.933 ' 00:29:37.475 [2024-07-15 21:05:41.110556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.415 [2024-07-15 21:05:42.278586] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:40.960 [2024-07-15 21:05:44.416838] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:42.394 [2024-07-15 21:05:46.254397] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:43.780 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:43.780 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:43.780 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:43.780 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:43.780 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:43.781 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:43.781 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:43.781 
Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:43.781 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:43.781 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:43.781 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:43.781 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:44.042 21:05:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:44.042 21:05:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:44.042 21:05:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:44.042 21:05:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:44.042 21:05:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:44.042 21:05:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:44.042 21:05:47 
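[Editor's note] The spdkcli_job.py run above drives SPDK's interactive CLI with (command, expected-output, flag) triples to build the NVMe-oF TCP configuration. For readers who want to reproduce a similar layout outside the test harness, here is a minimal sketch using one-shot scripts/spdkcli.py commands (the log itself invokes spdkcli.py this way for `ll /nvmf`). The SPDK_DIR path and an already-running target on the default /var/tmp/spdk.sock RPC socket are assumptions; only a subset of the objects created above is shown.

```bash
#!/usr/bin/env bash
# Minimal sketch: rebuild part of the NVMe-oF TCP config that the spdkcli_job.py
# invocation above creates, using one-shot spdkcli commands.
# Assumptions: SPDK_DIR points at an SPDK checkout and a target app is already
# listening on the default /var/tmp/spdk.sock RPC socket.
set -euo pipefail
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
CLI="$SPDK_DIR/scripts/spdkcli.py"

# Backing malloc bdevs (32 MiB, 512-byte blocks), as in the log above.
$CLI '/bdevs/malloc create 32 512 Malloc1'
$CLI '/bdevs/malloc create 32 512 Malloc3'
$CLI '/bdevs/malloc create 32 512 Malloc4'

# TCP transport with the same limits used by the test.
$CLI 'nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'

# One subsystem with two namespaces and a listener on 127.0.0.1:4260.
$CLI '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'
$CLI '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'
$CLI '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'
$CLI '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'

# Inspect the resulting tree the same way the check_match step does.
$CLI ll /nvmf
```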
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:44.042 21:05:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:44.303 21:05:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:44.564 21:05:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:44.564 21:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:44.564 21:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:44.564 21:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:44.564 21:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:44.564 21:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:44.564 21:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:44.564 21:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:44.564 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:44.564 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:44.564 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:44.564 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:44.564 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:44.564 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:44.564 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:44.564 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:44.564 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:44.564 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:44.564 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:44.564 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:44.564 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:44.564 ' 00:29:49.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:49.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:49.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:49.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:49.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:49.857 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:49.857 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 
00:29:49.857 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:49.857 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:49.857 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:49.857 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:49.857 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:49.857 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:49.857 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1774742 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1774742 ']' 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1774742 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774742 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774742' 00:29:49.857 killing process with pid 1774742 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1774742 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1774742 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1774742 ']' 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1774742 00:29:49.857 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1774742 ']' 00:29:49.858 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1774742 00:29:49.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1774742) - No such process 00:29:49.858 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1774742 is not found' 00:29:49.858 Process with pid 1774742 is not found 00:29:49.858 21:05:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:49.858 21:05:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:49.858 21:05:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:49.858 00:29:49.858 real 0m15.528s 00:29:49.858 user 0m32.011s 00:29:49.858 sys 0m0.704s 00:29:49.858 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:49.858 21:05:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:49.858 
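[Editor's note] Before tearing down, the harness verifies the built tree by dumping `spdkcli.py ll /nvmf` and checking it against spdkcli_nvmf.test.match with the match tool; it then deletes every object and kills the target, as the delete commands above show. A hedged sketch of the same teardown for the objects created in the earlier sketch (the match-file check is omitted; NVMF_PID is an assumed variable holding the nvmf_tgt PID):

```bash
#!/usr/bin/env bash
# Sketch of the clear-config / shutdown phase logged above.
# Assumptions: same SPDK_DIR and default RPC socket as the previous sketch, and
# NVMF_PID set to the nvmf_tgt PID (the harness records it at launch time).
set -euo pipefail
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
CLI="$SPDK_DIR/scripts/spdkcli.py"

# Tear down in the same order as the delete commands above: namespaces and
# listeners first, then the subsystems, then the backing bdevs.
$CLI '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'
$CLI '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'
$CLI '/nvmf/subsystem delete_all'
$CLI '/bdevs/malloc delete Malloc4'
$CLI '/bdevs/malloc delete Malloc3'
$CLI '/bdevs/malloc delete Malloc1'

# Stop the target and wait briefly for it to exit.
kill "${NVMF_PID:?set NVMF_PID to the nvmf_tgt pid}"
for _ in $(seq 1 20); do
    kill -0 "$NVMF_PID" 2>/dev/null || break
    sleep 0.5
done
```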
************************************ 00:29:49.858 END TEST spdkcli_nvmf_tcp 00:29:49.858 ************************************ 00:29:49.858 21:05:53 -- common/autotest_common.sh@1142 -- # return 0 00:29:49.858 21:05:53 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:49.858 21:05:53 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:49.858 21:05:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:49.858 21:05:53 -- common/autotest_common.sh@10 -- # set +x 00:29:49.858 ************************************ 00:29:49.858 START TEST nvmf_identify_passthru 00:29:49.858 ************************************ 00:29:49.858 21:05:53 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:49.858 * Looking for test storage... 00:29:49.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:49.858 21:05:53 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.858 21:05:53 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.858 21:05:53 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.858 21:05:53 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.858 21:05:53 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.858 21:05:53 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.858 21:05:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.858 21:05:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:49.858 21:05:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:49.858 21:05:53 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.858 21:05:53 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.858 21:05:53 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.858 21:05:53 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.858 21:05:53 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.858 21:05:53 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.858 21:05:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.858 21:05:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:49.858 21:05:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.858 21:05:53 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.858 21:05:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:49.858 21:05:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:49.858 21:05:53 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:49.858 21:05:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:58.009 21:06:00 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:58.009 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:58.009 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.009 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:58.009 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:58.010 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
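[Editor's note] The gather_supported_nvmf_pci_devs trace above walks a list of known Intel/Mellanox device IDs and maps each matching PCI function to its kernel netdev via sysfs (here both E810 ports, 0x8086:0x159b, resolve to cvl_0_0 and cvl_0_1). A standalone sketch of that discovery for the E810 IDs seen in this run; the ID list comes from the log, everything else is plain sysfs:

```bash
#!/usr/bin/env bash
# Sketch: find Intel E810 ports (vendor 0x8086, device 0x1592/0x159b) and print
# the net interface exposed under each PCI function, as nvmf/common.sh does above.
set -euo pipefail
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor")    # e.g. 0x8086
    device=$(cat "$dev/device")    # e.g. 0x159b
    [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]] || continue
    bdf=$(basename "$dev")
    # A bound netdev shows up as a directory name under <pci>/net/.
    for net in "$dev"/net/*; do
        [[ -e $net ]] || continue
        echo "Found net device under $bdf: $(basename "$net")"
    done
done
```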
00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:58.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:58.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:29:58.010 00:29:58.010 --- 10.0.0.2 ping statistics --- 00:29:58.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.010 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:58.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:58.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.393 ms 00:29:58.010 00:29:58.010 --- 10.0.0.1 ping statistics --- 00:29:58.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.010 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:58.010 21:06:00 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:58.010 21:06:00 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:58.010 21:06:00 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:58.010 21:06:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.010 21:06:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:58.010 21:06:00 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:58.010 21:06:00 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:29:58.010 21:06:00 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:58.010 21:06:00 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:58.010 21:06:00 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:58.010 21:06:00 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:29:58.010 21:06:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:58.010 21:06:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:58.010 21:06:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:58.010 21:06:00 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:58.010 21:06:00 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:29:58.010 21:06:00 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:29:58.010 21:06:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:29:58.010 21:06:00 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:29:58.010 21:06:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:29:58.010 21:06:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:58.010 21:06:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:58.010 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.010 
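[Editor's note] The nvmf_tcp_init sequence above splits the two E810 ports into an initiator side left in the root namespace (cvl_0_1, 10.0.0.1) and a target side moved into the cvl_0_0_ns_spdk namespace (cvl_0_0, 10.0.0.2), opens TCP/4420, and ping-checks both directions. A condensed sketch of that topology setup; the interface names are the ones from this run and will differ on other machines, and the commands need root:

```bash
#!/usr/bin/env bash
# Sketch of the namespace-based NVMe/TCP test topology built above (run as root).
# Assumed names from this log: target port cvl_0_0, initiator port cvl_0_1.
set -euo pipefail
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   INIT_IF=cvl_0_1
TGT_IP=10.0.0.2  INIT_IP=10.0.0.1

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INIT_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"               # target port lives in the namespace

ip addr add "$INIT_IP/24" dev "$INIT_IF"        # initiator side stays in the root ns
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow the NVMe/TCP default port in, then verify reachability both ways.
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$TGT_IP"
ip netns exec "$NS" ping -c 1 "$INIT_IP"
```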
21:06:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:29:58.010 21:06:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:29:58.010 21:06:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:58.010 21:06:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:58.010 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.010 21:06:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:29:58.010 21:06:01 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:58.010 21:06:01 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:58.010 21:06:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.271 21:06:01 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:58.271 21:06:01 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:58.271 21:06:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.271 21:06:01 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1781900 00:29:58.271 21:06:01 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:58.271 21:06:01 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:58.271 21:06:01 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1781900 00:29:58.271 21:06:01 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1781900 ']' 00:29:58.271 21:06:01 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.271 21:06:01 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:58.271 21:06:01 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.271 21:06:01 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:58.271 21:06:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.271 [2024-07-15 21:06:01.974563] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:29:58.271 [2024-07-15 21:06:01.974616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.271 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.271 [2024-07-15 21:06:02.041668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:58.271 [2024-07-15 21:06:02.110602] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.271 [2024-07-15 21:06:02.110638] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:58.271 [2024-07-15 21:06:02.110646] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.272 [2024-07-15 21:06:02.110653] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.272 [2024-07-15 21:06:02.110658] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:58.272 [2024-07-15 21:06:02.110800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.272 [2024-07-15 21:06:02.110896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:58.272 [2024-07-15 21:06:02.111051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.272 [2024-07-15 21:06:02.111053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:59.214 21:06:02 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:59.214 21:06:02 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:29:59.214 21:06:02 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:59.214 21:06:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.214 21:06:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:59.214 INFO: Log level set to 20 00:29:59.214 INFO: Requests: 00:29:59.214 { 00:29:59.214 "jsonrpc": "2.0", 00:29:59.214 "method": "nvmf_set_config", 00:29:59.214 "id": 1, 00:29:59.214 "params": { 00:29:59.214 "admin_cmd_passthru": { 00:29:59.214 "identify_ctrlr": true 00:29:59.214 } 00:29:59.214 } 00:29:59.214 } 00:29:59.214 00:29:59.214 INFO: response: 00:29:59.214 { 00:29:59.214 "jsonrpc": "2.0", 00:29:59.214 "id": 1, 00:29:59.214 "result": true 00:29:59.214 } 00:29:59.214 00:29:59.214 21:06:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.214 21:06:02 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:59.214 21:06:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.214 21:06:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:59.214 INFO: Setting log level to 20 00:29:59.214 INFO: Setting log level to 20 00:29:59.214 INFO: Log level set to 20 00:29:59.214 INFO: Log level set to 20 00:29:59.214 INFO: Requests: 00:29:59.214 { 00:29:59.214 "jsonrpc": "2.0", 00:29:59.214 "method": "framework_start_init", 00:29:59.214 "id": 1 00:29:59.214 } 00:29:59.214 00:29:59.214 INFO: Requests: 00:29:59.214 { 00:29:59.214 "jsonrpc": "2.0", 00:29:59.214 "method": "framework_start_init", 00:29:59.214 "id": 1 00:29:59.214 } 00:29:59.214 00:29:59.214 [2024-07-15 21:06:02.830539] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:59.214 INFO: response: 00:29:59.214 { 00:29:59.214 "jsonrpc": "2.0", 00:29:59.214 "id": 1, 00:29:59.214 "result": true 00:29:59.214 } 00:29:59.214 00:29:59.214 INFO: response: 00:29:59.214 { 00:29:59.214 "jsonrpc": "2.0", 00:29:59.214 "id": 1, 00:29:59.214 "result": true 00:29:59.214 } 00:29:59.214 00:29:59.214 21:06:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.214 21:06:02 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:59.214 21:06:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.214 21:06:02 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:29:59.214 INFO: Setting log level to 40 00:29:59.214 INFO: Setting log level to 40 00:29:59.214 INFO: Setting log level to 40 00:29:59.214 [2024-07-15 21:06:02.843858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.214 21:06:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.214 21:06:02 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:59.214 21:06:02 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:59.214 21:06:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:59.214 21:06:02 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:29:59.214 21:06:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.214 21:06:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:59.478 Nvme0n1 00:29:59.478 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.478 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:59.478 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.478 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:59.478 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.478 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:59.478 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.478 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:59.478 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.478 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:59.478 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.478 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:59.478 [2024-07-15 21:06:03.229392] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.478 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.478 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:59.478 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.478 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:59.478 [ 00:29:59.478 { 00:29:59.478 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:59.478 "subtype": "Discovery", 00:29:59.478 "listen_addresses": [], 00:29:59.478 "allow_any_host": true, 00:29:59.478 "hosts": [] 00:29:59.478 }, 00:29:59.478 { 00:29:59.478 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:59.478 "subtype": "NVMe", 00:29:59.478 "listen_addresses": [ 00:29:59.478 { 00:29:59.478 "trtype": "TCP", 00:29:59.478 "adrfam": "IPv4", 00:29:59.478 "traddr": "10.0.0.2", 00:29:59.478 "trsvcid": "4420" 00:29:59.478 } 00:29:59.478 ], 00:29:59.478 "allow_any_host": true, 00:29:59.478 "hosts": [], 00:29:59.478 "serial_number": 
"SPDK00000000000001", 00:29:59.478 "model_number": "SPDK bdev Controller", 00:29:59.478 "max_namespaces": 1, 00:29:59.478 "min_cntlid": 1, 00:29:59.478 "max_cntlid": 65519, 00:29:59.478 "namespaces": [ 00:29:59.478 { 00:29:59.478 "nsid": 1, 00:29:59.478 "bdev_name": "Nvme0n1", 00:29:59.478 "name": "Nvme0n1", 00:29:59.478 "nguid": "36344730526054870025384500000044", 00:29:59.478 "uuid": "36344730-5260-5487-0025-384500000044" 00:29:59.478 } 00:29:59.478 ] 00:29:59.478 } 00:29:59.478 ] 00:29:59.478 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.478 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:59.478 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:59.478 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:59.478 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.739 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:29:59.739 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:59.739 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:59.739 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:59.739 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.739 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:29:59.739 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:29:59.739 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:29:59.739 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:59.739 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.739 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:59.739 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.739 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:59.739 21:06:03 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:59.739 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:59.739 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:59.739 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:59.739 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:59.739 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:59.739 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:59.739 rmmod nvme_tcp 00:29:59.739 rmmod nvme_fabrics 00:29:59.739 rmmod nvme_keyring 00:29:59.739 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:59.739 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:59.739 21:06:03 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:59.739 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1781900 ']' 00:29:59.739 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1781900 00:29:59.739 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1781900 ']' 00:29:59.739 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1781900 00:29:59.739 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:29:59.739 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:59.739 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1781900 00:29:59.999 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:59.999 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:59.999 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1781900' 00:29:59.999 killing process with pid 1781900 00:29:59.999 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1781900 00:29:59.999 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1781900 00:30:00.259 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:00.259 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:00.259 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:00.259 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:00.259 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:00.259 21:06:03 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.259 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:00.259 21:06:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.173 21:06:05 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:02.173 00:30:02.173 real 0m12.585s 00:30:02.173 user 0m9.665s 00:30:02.173 sys 0m6.077s 00:30:02.173 21:06:06 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:02.173 21:06:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:02.173 ************************************ 00:30:02.173 END TEST nvmf_identify_passthru 00:30:02.173 ************************************ 00:30:02.173 21:06:06 -- common/autotest_common.sh@1142 -- # return 0 00:30:02.173 21:06:06 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:02.173 21:06:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:02.173 21:06:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:02.173 21:06:06 -- common/autotest_common.sh@10 -- # set +x 00:30:02.435 ************************************ 00:30:02.435 START TEST nvmf_dif 00:30:02.435 ************************************ 00:30:02.435 21:06:06 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:02.435 * Looking for test storage... 
00:30:02.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:02.435 21:06:06 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:02.435 21:06:06 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:02.436 21:06:06 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.436 21:06:06 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.436 21:06:06 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.436 21:06:06 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.436 21:06:06 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.436 21:06:06 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.436 21:06:06 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:02.436 21:06:06 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:02.436 21:06:06 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:02.436 21:06:06 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:02.436 21:06:06 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:02.436 21:06:06 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:02.436 21:06:06 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.436 21:06:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:02.436 21:06:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:02.436 21:06:06 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:02.436 21:06:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:10.603 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:10.603 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
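The NIC enumeration in progress here keys off PCI vendor:device IDs: 0x8086:0x1592 and 0x8086:0x159b go into the e810 list, 0x8086:0x37d2 into the x722 list, and the 0x15b3 entries into the Mellanox list, so the two ports reported as found at 0000:4b:00.0 and 0000:4b:00.1 (ID 0x159b, ice driver) are treated as Intel E810 devices. A manual cross-check of the same IDs could look like this (illustrative only, not part of the test flow):

    lspci -nn -d 8086:159b    # should list 0000:4b:00.0 and 0000:4b:00.1 on this node
    lspci -nn -s 4b:00.0      # show that single port with its [vendor:device] pair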
00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:10.603 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:10.603 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:10.603 21:06:13 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:10.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:10.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:30:10.603 00:30:10.603 --- 10.0.0.2 ping statistics --- 00:30:10.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.603 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:10.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:10.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:30:10.603 00:30:10.603 --- 10.0.0.1 ping statistics --- 00:30:10.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.603 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:10.603 21:06:13 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:12.561 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:12.561 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:12.561 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:12.561 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:12.561 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:12.561 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:12.561 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:12.561 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:12.561 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:12.561 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:12.561 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:12.561 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:12.561 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:12.561 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:12.561 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:12.561 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:12.561 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:12.821 21:06:16 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.821 21:06:16 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:12.821 21:06:16 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:12.821 21:06:16 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.821 21:06:16 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:12.821 21:06:16 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:12.821 21:06:16 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:12.821 21:06:16 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:12.821 21:06:16 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:12.821 21:06:16 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:12.821 21:06:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:12.821 21:06:16 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1788222 00:30:12.821 21:06:16 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1788222 00:30:12.821 21:06:16 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:12.821 21:06:16 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1788222 ']' 00:30:12.821 21:06:16 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.821 21:06:16 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:12.821 21:06:16 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.821 21:06:16 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:12.821 21:06:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:13.082 [2024-07-15 21:06:16.759089] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:30:13.082 [2024-07-15 21:06:16.759150] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.082 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.082 [2024-07-15 21:06:16.829854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.082 [2024-07-15 21:06:16.903504] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.082 [2024-07-15 21:06:16.903545] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.082 [2024-07-15 21:06:16.903553] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.082 [2024-07-15 21:06:16.903559] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.082 [2024-07-15 21:06:16.903565] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
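At this point the nvmf target application is coming up inside the test namespace. Condensed into plain commands, the plumbing that nvmftestinit and nvmfappstart performed above is (commands taken from the trace; only the SPDK path is abbreviated as <spdk>):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
    ip netns exec cvl_0_0_ns_spdk <spdk>/build/bin/nvmf_tgt -i 0 -e 0xFFFF

Both directions were verified with the two pings before the target was launched, and the transport options already include --dif-insert-or-strip for the DIF tests that follow.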
00:30:13.082 [2024-07-15 21:06:16.903583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.652 21:06:17 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:13.652 21:06:17 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:13.653 21:06:17 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:13.653 21:06:17 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:13.653 21:06:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:13.913 21:06:17 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.913 21:06:17 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:13.913 21:06:17 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:13.913 21:06:17 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.913 21:06:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:13.913 [2024-07-15 21:06:17.570375] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.913 21:06:17 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.913 21:06:17 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:13.913 21:06:17 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:13.913 21:06:17 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:13.913 21:06:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:13.913 ************************************ 00:30:13.913 START TEST fio_dif_1_default 00:30:13.913 ************************************ 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:13.913 bdev_null0 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:13.913 [2024-07-15 21:06:17.658721] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:13.913 { 00:30:13.913 "params": { 00:30:13.913 "name": "Nvme$subsystem", 00:30:13.913 "trtype": "$TEST_TRANSPORT", 00:30:13.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.913 "adrfam": "ipv4", 00:30:13.913 "trsvcid": "$NVMF_PORT", 00:30:13.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.913 "hdgst": ${hdgst:-false}, 00:30:13.913 "ddgst": ${ddgst:-false} 00:30:13.913 }, 00:30:13.913 "method": "bdev_nvme_attach_controller" 00:30:13.913 } 00:30:13.913 EOF 00:30:13.913 )") 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:13.913 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:13.914 "params": { 00:30:13.914 "name": "Nvme0", 00:30:13.914 "trtype": "tcp", 00:30:13.914 "traddr": "10.0.0.2", 00:30:13.914 "adrfam": "ipv4", 00:30:13.914 "trsvcid": "4420", 00:30:13.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:13.914 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:13.914 "hdgst": false, 00:30:13.914 "ddgst": false 00:30:13.914 }, 00:30:13.914 "method": "bdev_nvme_attach_controller" 00:30:13.914 }' 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:13.914 21:06:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:14.494 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:14.494 fio-3.35 00:30:14.494 Starting 1 thread 00:30:14.494 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.727 00:30:26.727 filename0: (groupid=0, jobs=1): err= 0: pid=1788743: Mon Jul 15 21:06:28 2024 00:30:26.727 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10042msec) 00:30:26.727 slat (nsec): min=2909, max=16264, avg=5572.17, stdev=673.10 00:30:26.727 clat (usec): min=41852, max=44814, avg=42000.83, stdev=196.45 00:30:26.727 lat (usec): min=41857, max=44825, avg=42006.40, stdev=196.41 00:30:26.727 clat percentiles (usec): 00:30:26.727 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:30:26.727 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:26.727 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:26.727 | 99.00th=[42206], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:30:26.727 | 99.99th=[44827] 00:30:26.727 bw ( KiB/s): min= 352, max= 384, per=99.79%, avg=380.80, stdev= 9.85, samples=20 00:30:26.727 iops : min= 88, max= 96, 
avg=95.20, stdev= 2.46, samples=20 00:30:26.727 lat (msec) : 50=100.00% 00:30:26.727 cpu : usr=95.48%, sys=4.33%, ctx=12, majf=0, minf=226 00:30:26.727 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:26.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.727 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.727 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:26.727 00:30:26.727 Run status group 0 (all jobs): 00:30:26.727 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10042-10042msec 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.727 00:30:26.727 real 0m11.180s 00:30:26.727 user 0m27.635s 00:30:26.727 sys 0m0.739s 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:26.727 ************************************ 00:30:26.727 END TEST fio_dif_1_default 00:30:26.727 ************************************ 00:30:26.727 21:06:28 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:26.727 21:06:28 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:26.727 21:06:28 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:26.727 21:06:28 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:26.727 21:06:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:26.727 ************************************ 00:30:26.727 START TEST fio_dif_1_multi_subsystems 00:30:26.727 ************************************ 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
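Before the multi-subsystem run begins, note that the fio_dif_1_default figures just printed are internally consistent, which is a quick way to sanity-check a null-bdev run:

    IOPS      ~ iodepth / mean completion latency = 4 / 0.0420 s ~ 95
    bandwidth ~ 95.2 IOPS x 4 KiB                 ~ 381 KiB/s
    total     = 956 reads x 4 KiB                 = 3824 KiB in 10.042 s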
00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:26.727 bdev_null0 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:26.727 [2024-07-15 21:06:28.900298] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:26.727 bdev_null1 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:26.727 21:06:28 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:26.727 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:26.728 { 00:30:26.728 "params": { 00:30:26.728 "name": "Nvme$subsystem", 00:30:26.728 "trtype": "$TEST_TRANSPORT", 00:30:26.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:26.728 "adrfam": "ipv4", 00:30:26.728 "trsvcid": "$NVMF_PORT", 00:30:26.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:26.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:26.728 "hdgst": ${hdgst:-false}, 00:30:26.728 "ddgst": ${ddgst:-false} 00:30:26.728 }, 00:30:26.728 "method": "bdev_nvme_attach_controller" 00:30:26.728 } 00:30:26.728 EOF 00:30:26.728 )") 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:26.728 { 00:30:26.728 "params": { 00:30:26.728 "name": "Nvme$subsystem", 00:30:26.728 "trtype": "$TEST_TRANSPORT", 00:30:26.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:26.728 "adrfam": "ipv4", 00:30:26.728 "trsvcid": "$NVMF_PORT", 00:30:26.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:26.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:26.728 "hdgst": ${hdgst:-false}, 00:30:26.728 "ddgst": ${ddgst:-false} 00:30:26.728 }, 00:30:26.728 "method": "bdev_nvme_attach_controller" 00:30:26.728 } 00:30:26.728 EOF 00:30:26.728 )") 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
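The ldd | grep libasan | awk '{print $3}' probe traced here (followed by a libclang_rt.asan variant) implements the usual rule for SPDK's fio plugin: if build/fio/spdk_bdev links a sanitizer runtime, that runtime has to be preloaded ahead of the plugin, since the system fio binary is not built against it. A rough sketch of what the helper amounts to (not the exact code):

    asan_lib=$(ldd build/fio/spdk_bdev | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $(pwd)/build/fio/spdk_bdev" \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

In this build both probes come back empty, so only the plugin itself ends up in LD_PRELOAD.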
00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:26.728 "params": { 00:30:26.728 "name": "Nvme0", 00:30:26.728 "trtype": "tcp", 00:30:26.728 "traddr": "10.0.0.2", 00:30:26.728 "adrfam": "ipv4", 00:30:26.728 "trsvcid": "4420", 00:30:26.728 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:26.728 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:26.728 "hdgst": false, 00:30:26.728 "ddgst": false 00:30:26.728 }, 00:30:26.728 "method": "bdev_nvme_attach_controller" 00:30:26.728 },{ 00:30:26.728 "params": { 00:30:26.728 "name": "Nvme1", 00:30:26.728 "trtype": "tcp", 00:30:26.728 "traddr": "10.0.0.2", 00:30:26.728 "adrfam": "ipv4", 00:30:26.728 "trsvcid": "4420", 00:30:26.728 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:26.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:26.728 "hdgst": false, 00:30:26.728 "ddgst": false 00:30:26.728 }, 00:30:26.728 "method": "bdev_nvme_attach_controller" 00:30:26.728 }' 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:26.728 21:06:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:26.728 21:06:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:26.728 21:06:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:26.728 21:06:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:26.728 21:06:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:26.728 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:26.728 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:26.728 fio-3.35 00:30:26.728 Starting 2 threads 00:30:26.728 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.748 00:30:36.748 filename0: (groupid=0, jobs=1): err= 0: pid=1791065: Mon Jul 15 21:06:40 2024 00:30:36.748 read: IOPS=185, BW=742KiB/s (760kB/s)(7440KiB/10028msec) 00:30:36.748 slat (nsec): min=5407, max=79260, avg=6463.84, stdev=2279.07 00:30:36.748 clat (usec): min=1061, max=43087, avg=21545.79, stdev=20113.95 00:30:36.748 lat (usec): min=1066, max=43123, avg=21552.25, stdev=20113.84 00:30:36.748 clat percentiles (usec): 00:30:36.748 | 1.00th=[ 1123], 5.00th=[ 1303], 10.00th=[ 1369], 20.00th=[ 1401], 00:30:36.748 | 30.00th=[ 1418], 40.00th=[ 1434], 50.00th=[41157], 60.00th=[41681], 00:30:36.748 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:36.748 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:30:36.748 | 99.99th=[43254] 
00:30:36.748 bw ( KiB/s): min= 704, max= 768, per=66.15%, avg=742.40, stdev=32.17, samples=20 00:30:36.748 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:30:36.748 lat (msec) : 2=49.89%, 50=50.11% 00:30:36.748 cpu : usr=96.72%, sys=3.04%, ctx=23, majf=0, minf=263 00:30:36.748 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:36.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.748 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:36.748 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:36.748 filename1: (groupid=0, jobs=1): err= 0: pid=1791066: Mon Jul 15 21:06:40 2024 00:30:36.748 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10042msec) 00:30:36.748 slat (nsec): min=5409, max=34014, avg=6918.95, stdev=1806.32 00:30:36.748 clat (usec): min=41681, max=43233, avg=41994.37, stdev=140.13 00:30:36.748 lat (usec): min=41687, max=43267, avg=42001.29, stdev=140.50 00:30:36.748 clat percentiles (usec): 00:30:36.748 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:30:36.749 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:36.749 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:36.749 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:30:36.749 | 99.99th=[43254] 00:30:36.749 bw ( KiB/s): min= 352, max= 384, per=33.88%, avg=380.80, stdev= 9.85, samples=20 00:30:36.749 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:30:36.749 lat (msec) : 50=100.00% 00:30:36.749 cpu : usr=96.60%, sys=3.16%, ctx=13, majf=0, minf=41 00:30:36.749 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:36.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.749 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:36.749 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:36.749 00:30:36.749 Run status group 0 (all jobs): 00:30:36.749 READ: bw=1122KiB/s (1149kB/s), 381KiB/s-742KiB/s (390kB/s-760kB/s), io=11.0MiB (11.5MB), run=10028-10042msec 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.749 21:06:40 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.749 00:30:36.749 real 0m11.503s 00:30:36.749 user 0m35.765s 00:30:36.749 sys 0m0.984s 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:36.749 21:06:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:36.749 ************************************ 00:30:36.749 END TEST fio_dif_1_multi_subsystems 00:30:36.749 ************************************ 00:30:36.749 21:06:40 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:36.749 21:06:40 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:36.749 21:06:40 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:36.749 21:06:40 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:36.749 21:06:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:36.749 ************************************ 00:30:36.749 START TEST fio_dif_rand_params 00:30:36.749 ************************************ 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
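For fio_dif_rand_params the parameters switch to DIF type 3 null bdevs with 128k blocks, 3 jobs at queue depth 3 for 5 seconds. The create_subsystem 0 call that the trace expands next issues the same four RPCs as the earlier tests, now with --dif-type 3; expressed directly against scripts/rpc.py (rpc_cmd is the in-tree wrapper around it), the sequence is roughly:

    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420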
00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.749 bdev_null0 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.749 [2024-07-15 21:06:40.484653] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:36.749 { 00:30:36.749 "params": { 00:30:36.749 "name": "Nvme$subsystem", 00:30:36.749 "trtype": "$TEST_TRANSPORT", 00:30:36.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.749 "adrfam": "ipv4", 00:30:36.749 "trsvcid": "$NVMF_PORT", 00:30:36.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.749 "hdgst": ${hdgst:-false}, 
00:30:36.749 "ddgst": ${ddgst:-false} 00:30:36.749 }, 00:30:36.749 "method": "bdev_nvme_attach_controller" 00:30:36.749 } 00:30:36.749 EOF 00:30:36.749 )") 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:36.749 "params": { 00:30:36.749 "name": "Nvme0", 00:30:36.749 "trtype": "tcp", 00:30:36.749 "traddr": "10.0.0.2", 00:30:36.749 "adrfam": "ipv4", 00:30:36.749 "trsvcid": "4420", 00:30:36.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:36.749 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:36.749 "hdgst": false, 00:30:36.749 "ddgst": false 00:30:36.749 }, 00:30:36.749 "method": "bdev_nvme_attach_controller" 00:30:36.749 }' 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:36.749 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:36.750 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:36.750 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:36.750 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:36.750 21:06:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:37.010 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:37.010 ... 
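The job fio launches here is generated on the fly and passed via /dev/fd, but written out as a conventional job file for the spdk_bdev ioengine it would look roughly like the sketch below. The filename Nvme0n1 is the namespace bdev produced by the bdev_nvme_attach_controller call in the JSON above; the exact names and the bdev.json path are illustrative, not taken from the trace.

    [global]
    ioengine=spdk_bdev
    spdk_json_conf=./bdev.json   ; JSON like the '{ "params": { "name": "Nvme0", ... } }' printed above
    thread=1                     ; required by the SPDK fio plugin
    filename=Nvme0n1

    [filename0]
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    time_based=1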
00:30:37.010 fio-3.35 00:30:37.010 Starting 3 threads 00:30:37.270 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.558 00:30:42.558 filename0: (groupid=0, jobs=1): err= 0: pid=1793459: Mon Jul 15 21:06:46 2024 00:30:42.558 read: IOPS=127, BW=15.9MiB/s (16.7MB/s)(79.8MiB/5007msec) 00:30:42.558 slat (nsec): min=5403, max=36670, avg=7120.46, stdev=2075.87 00:30:42.558 clat (usec): min=5708, max=95952, avg=23523.70, stdev=21098.39 00:30:42.558 lat (usec): min=5714, max=95958, avg=23530.82, stdev=21098.64 00:30:42.558 clat percentiles (usec): 00:30:42.558 | 1.00th=[ 6652], 5.00th=[ 7898], 10.00th=[ 8455], 20.00th=[ 9503], 00:30:42.558 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11600], 60.00th=[12911], 00:30:42.558 | 70.00th=[15008], 80.00th=[52167], 90.00th=[53740], 95.00th=[54789], 00:30:42.558 | 99.00th=[93848], 99.50th=[93848], 99.90th=[95945], 99.95th=[95945], 00:30:42.558 | 99.99th=[95945] 00:30:42.558 bw ( KiB/s): min= 8448, max=24320, per=28.53%, avg=16281.60, stdev=5510.71, samples=10 00:30:42.558 iops : min= 66, max= 190, avg=127.20, stdev=43.05, samples=10 00:30:42.558 lat (msec) : 10=27.90%, 20=43.42%, 50=1.25%, 100=27.43% 00:30:42.558 cpu : usr=96.14%, sys=3.58%, ctx=14, majf=0, minf=80 00:30:42.558 IO depths : 1=5.5%, 2=94.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:42.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.558 issued rwts: total=638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:42.558 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:42.558 filename0: (groupid=0, jobs=1): err= 0: pid=1793460: Mon Jul 15 21:06:46 2024 00:30:42.558 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(124MiB/5006msec) 00:30:42.558 slat (usec): min=5, max=151, avg= 7.74, stdev= 4.90 00:30:42.558 clat (usec): min=5672, max=91226, avg=15115.83, stdev=14850.66 00:30:42.558 lat (usec): min=5678, max=91233, avg=15123.57, stdev=14850.66 00:30:42.558 clat percentiles (usec): 00:30:42.558 | 1.00th=[ 6063], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7635], 00:30:42.558 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10028], 00:30:42.558 | 70.00th=[10814], 80.00th=[11994], 90.00th=[50070], 95.00th=[51643], 00:30:42.558 | 99.00th=[53740], 99.50th=[53740], 99.90th=[91751], 99.95th=[91751], 00:30:42.558 | 99.99th=[91751] 00:30:42.558 bw ( KiB/s): min=17664, max=34560, per=44.45%, avg=25369.60, stdev=5822.02, samples=10 00:30:42.558 iops : min= 138, max= 270, avg=198.20, stdev=45.48, samples=10 00:30:42.558 lat (msec) : 10=59.17%, 20=26.71%, 50=3.93%, 100=10.18% 00:30:42.558 cpu : usr=95.18%, sys=4.46%, ctx=15, majf=0, minf=183 00:30:42.558 IO depths : 1=3.6%, 2=96.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:42.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.558 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:42.558 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:42.558 filename0: (groupid=0, jobs=1): err= 0: pid=1793461: Mon Jul 15 21:06:46 2024 00:30:42.558 read: IOPS=122, BW=15.4MiB/s (16.1MB/s)(77.5MiB/5046msec) 00:30:42.558 slat (nsec): min=5414, max=31985, avg=7193.82, stdev=1705.77 00:30:42.558 clat (usec): min=6023, max=96803, avg=24331.13, stdev=21484.04 00:30:42.558 lat (usec): min=6029, max=96814, avg=24338.33, stdev=21484.19 00:30:42.558 clat percentiles (usec): 00:30:42.558 
| 1.00th=[ 7111], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9765], 00:30:42.558 | 30.00th=[10421], 40.00th=[11207], 50.00th=[12256], 60.00th=[13304], 00:30:42.558 | 70.00th=[16188], 80.00th=[52167], 90.00th=[53740], 95.00th=[54789], 00:30:42.558 | 99.00th=[93848], 99.50th=[94897], 99.90th=[96994], 99.95th=[96994], 00:30:42.558 | 99.99th=[96994] 00:30:42.558 bw ( KiB/s): min=11520, max=21504, per=27.72%, avg=15820.80, stdev=2875.62, samples=10 00:30:42.558 iops : min= 90, max= 168, avg=123.60, stdev=22.47, samples=10 00:30:42.558 lat (msec) : 10=24.19%, 20=46.29%, 50=0.97%, 100=28.55% 00:30:42.558 cpu : usr=96.59%, sys=3.11%, ctx=7, majf=0, minf=117 00:30:42.558 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:42.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.558 issued rwts: total=620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:42.558 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:42.558 00:30:42.558 Run status group 0 (all jobs): 00:30:42.558 READ: bw=55.7MiB/s (58.4MB/s), 15.4MiB/s-24.8MiB/s (16.1MB/s-26.0MB/s), io=281MiB (295MB), run=5006-5046msec 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:42.820 21:06:46 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.820 bdev_null0 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.820 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.821 [2024-07-15 21:06:46.556699] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.821 bdev_null1 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.821 bdev_null2 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.821 { 00:30:42.821 "params": { 00:30:42.821 "name": "Nvme$subsystem", 00:30:42.821 "trtype": "$TEST_TRANSPORT", 00:30:42.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.821 "adrfam": "ipv4", 00:30:42.821 "trsvcid": "$NVMF_PORT", 00:30:42.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.821 "hdgst": ${hdgst:-false}, 00:30:42.821 "ddgst": ${ddgst:-false} 00:30:42.821 }, 00:30:42.821 "method": "bdev_nvme_attach_controller" 00:30:42.821 } 00:30:42.821 EOF 00:30:42.821 )") 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.821 { 00:30:42.821 "params": { 00:30:42.821 "name": "Nvme$subsystem", 00:30:42.821 "trtype": "$TEST_TRANSPORT", 00:30:42.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.821 "adrfam": "ipv4", 00:30:42.821 "trsvcid": "$NVMF_PORT", 00:30:42.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.821 "hdgst": ${hdgst:-false}, 00:30:42.821 "ddgst": ${ddgst:-false} 00:30:42.821 }, 00:30:42.821 "method": "bdev_nvme_attach_controller" 00:30:42.821 } 00:30:42.821 EOF 00:30:42.821 )") 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.821 { 00:30:42.821 "params": { 00:30:42.821 "name": "Nvme$subsystem", 00:30:42.821 "trtype": "$TEST_TRANSPORT", 00:30:42.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.821 "adrfam": "ipv4", 00:30:42.821 "trsvcid": "$NVMF_PORT", 00:30:42.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.821 "hdgst": ${hdgst:-false}, 00:30:42.821 "ddgst": ${ddgst:-false} 00:30:42.821 }, 00:30:42.821 "method": "bdev_nvme_attach_controller" 00:30:42.821 } 00:30:42.821 EOF 00:30:42.821 )") 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:42.821 21:06:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:42.821 "params": { 00:30:42.821 "name": "Nvme0", 00:30:42.821 "trtype": "tcp", 00:30:42.821 "traddr": "10.0.0.2", 00:30:42.821 "adrfam": "ipv4", 00:30:42.821 "trsvcid": "4420", 00:30:42.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:42.821 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:42.821 "hdgst": false, 00:30:42.821 "ddgst": false 00:30:42.821 }, 00:30:42.821 "method": "bdev_nvme_attach_controller" 00:30:42.821 },{ 00:30:42.821 "params": { 00:30:42.822 "name": "Nvme1", 00:30:42.822 "trtype": "tcp", 00:30:42.822 "traddr": "10.0.0.2", 00:30:42.822 "adrfam": "ipv4", 00:30:42.822 "trsvcid": "4420", 00:30:42.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.822 "hdgst": false, 00:30:42.822 "ddgst": false 00:30:42.822 }, 00:30:42.822 "method": "bdev_nvme_attach_controller" 00:30:42.822 },{ 00:30:42.822 "params": { 00:30:42.822 "name": "Nvme2", 00:30:42.822 "trtype": "tcp", 00:30:42.822 "traddr": "10.0.0.2", 00:30:42.822 "adrfam": "ipv4", 00:30:42.822 "trsvcid": "4420", 00:30:42.822 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:42.822 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:42.822 "hdgst": false, 00:30:42.822 "ddgst": false 00:30:42.822 }, 00:30:42.822 "method": "bdev_nvme_attach_controller" 00:30:42.822 }' 00:30:42.822 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:42.822 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:42.822 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:42.822 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:42.822 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:42.822 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:43.106 21:06:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:30:43.106 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:43.106 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:43.106 21:06:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.371 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:43.371 ... 00:30:43.371 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:43.371 ... 00:30:43.371 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:43.371 ... 00:30:43.371 fio-3.35 00:30:43.371 Starting 24 threads 00:30:43.371 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.662 00:30:55.662 filename0: (groupid=0, jobs=1): err= 0: pid=1794959: Mon Jul 15 21:06:58 2024 00:30:55.662 read: IOPS=511, BW=2045KiB/s (2095kB/s)(20.0MiB/10034msec) 00:30:55.663 slat (usec): min=5, max=134, avg=16.92, stdev=16.19 00:30:55.663 clat (usec): min=2477, max=56050, avg=31101.57, stdev=5888.74 00:30:55.663 lat (usec): min=2514, max=56078, avg=31118.49, stdev=5889.77 00:30:55.663 clat percentiles (usec): 00:30:55.663 | 1.00th=[ 4948], 5.00th=[19792], 10.00th=[26084], 20.00th=[30802], 00:30:55.663 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:55.663 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[36963], 00:30:55.663 | 99.00th=[48497], 99.50th=[50594], 99.90th=[53740], 99.95th=[55837], 00:30:55.663 | 99.99th=[55837] 00:30:55.663 bw ( KiB/s): min= 1916, max= 2736, per=4.32%, avg=2048.90, stdev=178.47, samples=20 00:30:55.663 iops : min= 479, max= 684, avg=512.15, stdev=44.62, samples=20 00:30:55.663 lat (msec) : 4=0.70%, 10=1.17%, 20=3.16%, 50=94.45%, 100=0.53% 00:30:55.663 cpu : usr=98.53%, sys=1.09%, ctx=39, majf=0, minf=107 00:30:55.663 IO depths : 1=4.1%, 2=8.3%, 4=19.4%, 8=59.1%, 16=9.1%, 32=0.0%, >=64=0.0% 00:30:55.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.663 complete : 0=0.0%, 4=92.9%, 8=1.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.663 issued rwts: total=5131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.663 filename0: (groupid=0, jobs=1): err= 0: pid=1794960: Mon Jul 15 21:06:58 2024 00:30:55.663 read: IOPS=495, BW=1982KiB/s (2029kB/s)(19.4MiB/10007msec) 00:30:55.663 slat (usec): min=5, max=130, avg=26.35, stdev=20.22 00:30:55.663 clat (usec): min=9789, max=59785, avg=32070.20, stdev=4247.83 00:30:55.663 lat (usec): min=9795, max=59800, avg=32096.55, stdev=4248.13 00:30:55.663 clat percentiles (usec): 00:30:55.663 | 1.00th=[19006], 5.00th=[26608], 10.00th=[30540], 20.00th=[31065], 00:30:55.663 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:55.663 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[37487], 00:30:55.663 | 99.00th=[49546], 99.50th=[55837], 99.90th=[59507], 99.95th=[60031], 00:30:55.663 | 99.99th=[60031] 00:30:55.663 bw ( KiB/s): min= 1776, max= 2096, per=4.17%, avg=1978.21, stdev=78.83, samples=19 00:30:55.663 iops : min= 444, max= 524, avg=494.47, stdev=19.65, samples=19 00:30:55.663 lat (msec) : 10=0.24%, 20=1.01%, 
50=97.88%, 100=0.87% 00:30:55.663 cpu : usr=98.98%, sys=0.67%, ctx=21, majf=0, minf=61 00:30:55.663 IO depths : 1=3.7%, 2=8.4%, 4=20.3%, 8=58.2%, 16=9.4%, 32=0.0%, >=64=0.0% 00:30:55.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.663 complete : 0=0.0%, 4=93.3%, 8=1.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.663 issued rwts: total=4958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.663 filename0: (groupid=0, jobs=1): err= 0: pid=1794961: Mon Jul 15 21:06:58 2024 00:30:55.663 read: IOPS=501, BW=2007KiB/s (2055kB/s)(19.6MiB/10017msec) 00:30:55.663 slat (usec): min=5, max=113, avg=24.10, stdev=20.41 00:30:55.663 clat (usec): min=12567, max=56505, avg=31674.95, stdev=4568.03 00:30:55.663 lat (usec): min=12575, max=56525, avg=31699.05, stdev=4568.28 00:30:55.663 clat percentiles (usec): 00:30:55.663 | 1.00th=[17957], 5.00th=[22938], 10.00th=[26870], 20.00th=[30802], 00:30:55.663 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:55.663 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[39060], 00:30:55.663 | 99.00th=[48497], 99.50th=[50594], 99.90th=[55313], 99.95th=[55837], 00:30:55.663 | 99.99th=[56361] 00:30:55.663 bw ( KiB/s): min= 1900, max= 2192, per=4.23%, avg=2007.00, stdev=78.93, samples=19 00:30:55.663 iops : min= 475, max= 548, avg=501.63, stdev=19.67, samples=19 00:30:55.663 lat (msec) : 20=1.91%, 50=97.27%, 100=0.82% 00:30:55.663 cpu : usr=98.83%, sys=0.77%, ctx=85, majf=0, minf=77 00:30:55.663 IO depths : 1=1.6%, 2=4.6%, 4=14.1%, 8=67.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:30:55.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.663 complete : 0=0.0%, 4=91.9%, 8=3.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.663 issued rwts: total=5026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.663 filename0: (groupid=0, jobs=1): err= 0: pid=1794962: Mon Jul 15 21:06:58 2024 00:30:55.663 read: IOPS=511, BW=2048KiB/s (2097kB/s)(20.0MiB/10025msec) 00:30:55.663 slat (usec): min=5, max=131, avg=13.25, stdev=11.78 00:30:55.663 clat (usec): min=4109, max=60235, avg=31158.60, stdev=4729.34 00:30:55.663 lat (usec): min=4131, max=60243, avg=31171.85, stdev=4729.66 00:30:55.663 clat percentiles (usec): 00:30:55.663 | 1.00th=[10028], 5.00th=[21627], 10.00th=[27395], 20.00th=[31065], 00:30:55.663 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:55.663 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[34341], 00:30:55.663 | 99.00th=[42730], 99.50th=[47449], 99.90th=[55837], 99.95th=[60031], 00:30:55.663 | 99.99th=[60031] 00:30:55.663 bw ( KiB/s): min= 1916, max= 2432, per=4.31%, avg=2045.95, stdev=131.03, samples=20 00:30:55.663 iops : min= 479, max= 608, avg=511.45, stdev=32.76, samples=20 00:30:55.663 lat (msec) : 10=0.97%, 20=2.63%, 50=96.16%, 100=0.23% 00:30:55.663 cpu : usr=98.69%, sys=0.92%, ctx=36, majf=0, minf=61 00:30:55.663 IO depths : 1=3.3%, 2=7.8%, 4=20.7%, 8=58.6%, 16=9.6%, 32=0.0%, >=64=0.0% 00:30:55.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.663 complete : 0=0.0%, 4=93.4%, 8=1.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.663 issued rwts: total=5132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.663 filename0: (groupid=0, jobs=1): err= 0: pid=1794964: Mon Jul 15 21:06:58 2024 
00:30:55.663 read: IOPS=506, BW=2027KiB/s (2076kB/s)(19.8MiB/10021msec) 00:30:55.663 slat (nsec): min=5414, max=93441, avg=9927.38, stdev=6714.37 00:30:55.663 clat (usec): min=7408, max=56201, avg=31489.42, stdev=3277.53 00:30:55.663 lat (usec): min=7416, max=56210, avg=31499.35, stdev=3277.88 00:30:55.663 clat percentiles (usec): 00:30:55.663 | 1.00th=[16057], 5.00th=[28181], 10.00th=[30540], 20.00th=[31065], 00:30:55.663 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:55.663 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:30:55.663 | 99.00th=[36439], 99.50th=[38536], 99.90th=[56361], 99.95th=[56361], 00:30:55.663 | 99.99th=[56361] 00:30:55.663 bw ( KiB/s): min= 1916, max= 2176, per=4.27%, avg=2023.85, stdev=86.46, samples=20 00:30:55.663 iops : min= 479, max= 544, avg=505.85, stdev=21.53, samples=20 00:30:55.663 lat (msec) : 10=0.51%, 20=1.69%, 50=97.60%, 100=0.20% 00:30:55.663 cpu : usr=99.19%, sys=0.48%, ctx=14, majf=0, minf=70 00:30:55.663 IO depths : 1=5.7%, 2=11.7%, 4=24.5%, 8=51.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:30:55.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.663 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.663 issued rwts: total=5078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.663 filename0: (groupid=0, jobs=1): err= 0: pid=1794965: Mon Jul 15 21:06:58 2024 00:30:55.663 read: IOPS=518, BW=2074KiB/s (2124kB/s)(20.3MiB/10022msec) 00:30:55.663 slat (usec): min=5, max=115, avg=16.50, stdev=16.45 00:30:55.663 clat (usec): min=6372, max=59582, avg=30725.85, stdev=5373.92 00:30:55.663 lat (usec): min=6387, max=59588, avg=30742.35, stdev=5375.83 00:30:55.663 clat percentiles (usec): 00:30:55.663 | 1.00th=[12125], 5.00th=[19530], 10.00th=[23462], 20.00th=[30278], 00:30:55.663 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:55.663 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33424], 95.00th=[36963], 00:30:55.663 | 99.00th=[47449], 99.50th=[52167], 99.90th=[56361], 99.95th=[59507], 00:30:55.663 | 99.99th=[59507] 00:30:55.663 bw ( KiB/s): min= 1916, max= 2368, per=4.37%, avg=2071.65, stdev=124.96, samples=20 00:30:55.663 iops : min= 479, max= 592, avg=517.80, stdev=31.20, samples=20 00:30:55.663 lat (msec) : 10=0.54%, 20=4.79%, 50=94.11%, 100=0.56% 00:30:55.663 cpu : usr=97.08%, sys=1.78%, ctx=80, majf=0, minf=41 00:30:55.663 IO depths : 1=3.5%, 2=7.4%, 4=17.8%, 8=61.6%, 16=9.7%, 32=0.0%, >=64=0.0% 00:30:55.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.663 complete : 0=0.0%, 4=92.7%, 8=2.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.663 issued rwts: total=5196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.663 filename0: (groupid=0, jobs=1): err= 0: pid=1794966: Mon Jul 15 21:06:58 2024 00:30:55.663 read: IOPS=496, BW=1984KiB/s (2032kB/s)(19.4MiB/10019msec) 00:30:55.663 slat (usec): min=5, max=113, avg=21.32, stdev=17.83 00:30:55.663 clat (usec): min=16866, max=59875, avg=32084.82, stdev=4513.57 00:30:55.663 lat (usec): min=16872, max=59881, avg=32106.14, stdev=4513.36 00:30:55.663 clat percentiles (usec): 00:30:55.663 | 1.00th=[19792], 5.00th=[24249], 10.00th=[28443], 20.00th=[31065], 00:30:55.663 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:55.663 | 70.00th=[32375], 80.00th=[32900], 90.00th=[34341], 95.00th=[40633], 
00:30:55.663 | 99.00th=[49546], 99.50th=[53216], 99.90th=[58983], 99.95th=[58983], 00:30:55.663 | 99.99th=[60031] 00:30:55.663 bw ( KiB/s): min= 1900, max= 2096, per=4.18%, avg=1983.74, stdev=61.32, samples=19 00:30:55.663 iops : min= 475, max= 524, avg=495.89, stdev=15.29, samples=19 00:30:55.663 lat (msec) : 20=1.11%, 50=98.11%, 100=0.78% 00:30:55.663 cpu : usr=98.90%, sys=0.75%, ctx=36, majf=0, minf=68 00:30:55.663 IO depths : 1=2.9%, 2=7.0%, 4=18.5%, 8=61.4%, 16=10.1%, 32=0.0%, >=64=0.0% 00:30:55.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.663 complete : 0=0.0%, 4=92.5%, 8=2.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.663 issued rwts: total=4970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.663 filename0: (groupid=0, jobs=1): err= 0: pid=1794967: Mon Jul 15 21:06:58 2024 00:30:55.663 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10013msec) 00:30:55.663 slat (usec): min=5, max=131, avg=20.67, stdev=18.79 00:30:55.663 clat (usec): min=8004, max=57694, avg=32882.80, stdev=5841.66 00:30:55.663 lat (usec): min=8014, max=57732, avg=32903.46, stdev=5841.61 00:30:55.663 clat percentiles (usec): 00:30:55.663 | 1.00th=[17171], 5.00th=[24773], 10.00th=[29492], 20.00th=[31065], 00:30:55.663 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:55.663 | 70.00th=[32637], 80.00th=[33817], 90.00th=[39060], 95.00th=[45351], 00:30:55.663 | 99.00th=[55313], 99.50th=[56361], 99.90th=[57410], 99.95th=[57410], 00:30:55.663 | 99.99th=[57934] 00:30:55.663 bw ( KiB/s): min= 1824, max= 2064, per=4.08%, avg=1934.00, stdev=73.18, samples=19 00:30:55.663 iops : min= 456, max= 516, avg=483.42, stdev=18.18, samples=19 00:30:55.663 lat (msec) : 10=0.37%, 20=1.32%, 50=95.82%, 100=2.49% 00:30:55.663 cpu : usr=99.02%, sys=0.64%, ctx=17, majf=0, minf=62 00:30:55.664 IO depths : 1=0.9%, 2=3.6%, 4=15.0%, 8=67.2%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:55.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.664 complete : 0=0.0%, 4=92.5%, 8=2.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.664 issued rwts: total=4851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.664 filename1: (groupid=0, jobs=1): err= 0: pid=1794969: Mon Jul 15 21:06:58 2024 00:30:55.664 read: IOPS=495, BW=1983KiB/s (2030kB/s)(19.4MiB/10017msec) 00:30:55.664 slat (usec): min=5, max=128, avg=21.91, stdev=19.57 00:30:55.664 clat (usec): min=12131, max=56529, avg=32105.31, stdev=3479.59 00:30:55.664 lat (usec): min=12138, max=56551, avg=32127.23, stdev=3479.63 00:30:55.664 clat percentiles (usec): 00:30:55.664 | 1.00th=[21365], 5.00th=[28967], 10.00th=[30540], 20.00th=[31327], 00:30:55.664 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:55.664 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[35390], 00:30:55.664 | 99.00th=[49021], 99.50th=[51643], 99.90th=[54789], 99.95th=[56361], 00:30:55.664 | 99.99th=[56361] 00:30:55.664 bw ( KiB/s): min= 1800, max= 2048, per=4.16%, avg=1974.68, stdev=73.00, samples=19 00:30:55.664 iops : min= 450, max= 512, avg=493.63, stdev=18.21, samples=19 00:30:55.664 lat (msec) : 20=0.93%, 50=98.49%, 100=0.58% 00:30:55.664 cpu : usr=98.89%, sys=0.77%, ctx=20, majf=0, minf=52 00:30:55.664 IO depths : 1=4.4%, 2=9.3%, 4=21.2%, 8=56.6%, 16=8.6%, 32=0.0%, >=64=0.0% 00:30:55.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:55.664 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.664 issued rwts: total=4965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.664 filename1: (groupid=0, jobs=1): err= 0: pid=1794970: Mon Jul 15 21:06:58 2024 00:30:55.664 read: IOPS=490, BW=1964KiB/s (2011kB/s)(19.2MiB/10010msec) 00:30:55.664 slat (usec): min=5, max=124, avg=26.14, stdev=21.45 00:30:55.664 clat (usec): min=11992, max=60103, avg=32381.52, stdev=5110.96 00:30:55.664 lat (usec): min=12000, max=60111, avg=32407.65, stdev=5110.19 00:30:55.664 clat percentiles (usec): 00:30:55.664 | 1.00th=[17957], 5.00th=[24511], 10.00th=[29754], 20.00th=[30802], 00:30:55.664 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:55.664 | 70.00th=[32375], 80.00th=[33162], 90.00th=[35914], 95.00th=[41681], 00:30:55.664 | 99.00th=[53740], 99.50th=[55837], 99.90th=[58983], 99.95th=[58983], 00:30:55.664 | 99.99th=[60031] 00:30:55.664 bw ( KiB/s): min= 1760, max= 2096, per=4.13%, avg=1956.37, stdev=88.11, samples=19 00:30:55.664 iops : min= 440, max= 524, avg=489.05, stdev=22.00, samples=19 00:30:55.664 lat (msec) : 20=2.12%, 50=96.24%, 100=1.65% 00:30:55.664 cpu : usr=99.03%, sys=0.59%, ctx=47, majf=0, minf=73 00:30:55.664 IO depths : 1=2.6%, 2=5.7%, 4=16.7%, 8=64.4%, 16=10.6%, 32=0.0%, >=64=0.0% 00:30:55.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.664 complete : 0=0.0%, 4=92.3%, 8=2.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.664 issued rwts: total=4914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.664 filename1: (groupid=0, jobs=1): err= 0: pid=1794971: Mon Jul 15 21:06:58 2024 00:30:55.664 read: IOPS=489, BW=1959KiB/s (2006kB/s)(19.1MiB/10005msec) 00:30:55.664 slat (usec): min=5, max=116, avg=20.31, stdev=17.79 00:30:55.664 clat (usec): min=6277, max=64348, avg=32529.44, stdev=5324.72 00:30:55.664 lat (usec): min=6283, max=64364, avg=32549.75, stdev=5324.48 00:30:55.664 clat percentiles (usec): 00:30:55.664 | 1.00th=[15139], 5.00th=[26084], 10.00th=[30540], 20.00th=[31327], 00:30:55.664 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:55.664 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34866], 95.00th=[42206], 00:30:55.664 | 99.00th=[54789], 99.50th=[56886], 99.90th=[64226], 99.95th=[64226], 00:30:55.664 | 99.99th=[64226] 00:30:55.664 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1948.16, stdev=68.61, samples=19 00:30:55.664 iops : min= 448, max= 512, avg=487.00, stdev=17.22, samples=19 00:30:55.664 lat (msec) : 10=0.43%, 20=2.20%, 50=95.59%, 100=1.78% 00:30:55.664 cpu : usr=99.00%, sys=0.64%, ctx=23, majf=0, minf=73 00:30:55.664 IO depths : 1=1.2%, 2=3.1%, 4=11.6%, 8=70.9%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:55.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.664 complete : 0=0.0%, 4=91.3%, 8=4.9%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.664 issued rwts: total=4901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.664 filename1: (groupid=0, jobs=1): err= 0: pid=1794973: Mon Jul 15 21:06:58 2024 00:30:55.664 read: IOPS=487, BW=1951KiB/s (1997kB/s)(19.1MiB/10003msec) 00:30:55.664 slat (usec): min=5, max=120, avg=24.97, stdev=21.61 00:30:55.664 clat (usec): min=7579, max=84202, avg=32646.42, stdev=4946.12 00:30:55.664 lat (usec): min=7587, max=84219, 
avg=32671.38, stdev=4945.90 00:30:55.664 clat percentiles (usec): 00:30:55.664 | 1.00th=[19006], 5.00th=[27657], 10.00th=[30540], 20.00th=[31327], 00:30:55.664 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:55.664 | 70.00th=[32637], 80.00th=[33162], 90.00th=[35390], 95.00th=[41157], 00:30:55.664 | 99.00th=[53740], 99.50th=[57410], 99.90th=[63701], 99.95th=[63701], 00:30:55.664 | 99.99th=[84411] 00:30:55.664 bw ( KiB/s): min= 1744, max= 2096, per=4.10%, avg=1945.63, stdev=84.93, samples=19 00:30:55.664 iops : min= 436, max= 524, avg=486.37, stdev=21.23, samples=19 00:30:55.664 lat (msec) : 10=0.18%, 20=1.64%, 50=96.82%, 100=1.35% 00:30:55.664 cpu : usr=97.57%, sys=1.42%, ctx=90, majf=0, minf=64 00:30:55.664 IO depths : 1=1.5%, 2=3.3%, 4=9.7%, 8=72.4%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:55.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.664 complete : 0=0.0%, 4=90.9%, 8=5.1%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.664 issued rwts: total=4878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.664 filename1: (groupid=0, jobs=1): err= 0: pid=1794974: Mon Jul 15 21:06:58 2024 00:30:55.664 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10021msec) 00:30:55.664 slat (usec): min=5, max=136, avg=13.94, stdev=14.39 00:30:55.664 clat (usec): min=20669, max=46658, avg=31865.30, stdev=1923.93 00:30:55.664 lat (usec): min=20678, max=46684, avg=31879.23, stdev=1923.71 00:30:55.664 clat percentiles (usec): 00:30:55.664 | 1.00th=[21890], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:30:55.664 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:55.664 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:30:55.664 | 99.00th=[36439], 99.50th=[38011], 99.90th=[46400], 99.95th=[46400], 00:30:55.664 | 99.99th=[46400] 00:30:55.664 bw ( KiB/s): min= 1916, max= 2096, per=4.21%, avg=1998.30, stdev=67.32, samples=20 00:30:55.664 iops : min= 479, max= 524, avg=499.50, stdev=16.78, samples=20 00:30:55.664 lat (msec) : 50=100.00% 00:30:55.664 cpu : usr=99.01%, sys=0.63%, ctx=26, majf=0, minf=65 00:30:55.664 IO depths : 1=6.0%, 2=11.9%, 4=24.2%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:30:55.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.664 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.664 issued rwts: total=5014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.664 filename1: (groupid=0, jobs=1): err= 0: pid=1794975: Mon Jul 15 21:06:58 2024 00:30:55.664 read: IOPS=499, BW=1997KiB/s (2045kB/s)(19.5MiB/10012msec) 00:30:55.664 slat (usec): min=5, max=117, avg=19.39, stdev=18.49 00:30:55.664 clat (usec): min=11914, max=51706, avg=31907.62, stdev=3245.00 00:30:55.664 lat (usec): min=11923, max=51712, avg=31927.01, stdev=3245.20 00:30:55.664 clat percentiles (usec): 00:30:55.664 | 1.00th=[17957], 5.00th=[28443], 10.00th=[30540], 20.00th=[31065], 00:30:55.664 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:55.664 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[34866], 00:30:55.664 | 99.00th=[44303], 99.50th=[49021], 99.90th=[51643], 99.95th=[51643], 00:30:55.664 | 99.99th=[51643] 00:30:55.664 bw ( KiB/s): min= 1868, max= 2128, per=4.21%, avg=1995.63, stdev=74.40, samples=19 00:30:55.664 iops : min= 467, max= 532, avg=498.79, stdev=18.62, samples=19 00:30:55.664 
lat (msec) : 20=1.24%, 50=98.60%, 100=0.16% 00:30:55.664 cpu : usr=99.03%, sys=0.63%, ctx=21, majf=0, minf=79 00:30:55.664 IO depths : 1=3.3%, 2=7.2%, 4=18.9%, 8=61.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:30:55.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.664 complete : 0=0.0%, 4=92.7%, 8=1.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.664 issued rwts: total=4998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.664 filename1: (groupid=0, jobs=1): err= 0: pid=1794976: Mon Jul 15 21:06:58 2024 00:30:55.664 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.3MiB/10003msec) 00:30:55.664 slat (usec): min=5, max=136, avg=24.53, stdev=20.59 00:30:55.664 clat (usec): min=7042, max=56303, avg=32162.76, stdev=3869.83 00:30:55.664 lat (usec): min=7048, max=56320, avg=32187.28, stdev=3869.31 00:30:55.664 clat percentiles (usec): 00:30:55.664 | 1.00th=[16581], 5.00th=[30016], 10.00th=[30802], 20.00th=[31327], 00:30:55.664 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:55.664 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[36439], 00:30:55.664 | 99.00th=[50594], 99.50th=[54789], 99.90th=[56361], 99.95th=[56361], 00:30:55.664 | 99.99th=[56361] 00:30:55.664 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1969.21, stdev=74.31, samples=19 00:30:55.664 iops : min= 448, max= 512, avg=492.26, stdev=18.59, samples=19 00:30:55.664 lat (msec) : 10=0.08%, 20=1.31%, 50=97.58%, 100=1.03% 00:30:55.664 cpu : usr=98.70%, sys=0.90%, ctx=54, majf=0, minf=83 00:30:55.664 IO depths : 1=1.1%, 2=3.8%, 4=13.2%, 8=67.8%, 16=14.1%, 32=0.0%, >=64=0.0% 00:30:55.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.664 complete : 0=0.0%, 4=92.2%, 8=4.1%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.664 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.664 filename1: (groupid=0, jobs=1): err= 0: pid=1794977: Mon Jul 15 21:06:58 2024 00:30:55.664 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10020msec) 00:30:55.664 slat (usec): min=5, max=117, avg=24.34, stdev=18.66 00:30:55.664 clat (usec): min=15893, max=58600, avg=32521.63, stdev=4193.57 00:30:55.664 lat (usec): min=15901, max=58633, avg=32545.97, stdev=4193.05 00:30:55.664 clat percentiles (usec): 00:30:55.664 | 1.00th=[21365], 5.00th=[27919], 10.00th=[30540], 20.00th=[31065], 00:30:55.664 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:55.664 | 70.00th=[32375], 80.00th=[32900], 90.00th=[35390], 95.00th=[41157], 00:30:55.664 | 99.00th=[49546], 99.50th=[51119], 99.90th=[54264], 99.95th=[58459], 00:30:55.664 | 99.99th=[58459] 00:30:55.665 bw ( KiB/s): min= 1788, max= 2048, per=4.13%, avg=1956.53, stdev=87.68, samples=19 00:30:55.665 iops : min= 447, max= 512, avg=489.05, stdev=21.85, samples=19 00:30:55.665 lat (msec) : 20=0.67%, 50=98.41%, 100=0.92% 00:30:55.665 cpu : usr=98.90%, sys=0.75%, ctx=20, majf=0, minf=71 00:30:55.665 IO depths : 1=2.5%, 2=6.0%, 4=17.0%, 8=63.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:30:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.665 complete : 0=0.0%, 4=92.5%, 8=2.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.665 issued rwts: total=4900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.665 filename2: (groupid=0, jobs=1): err= 0: pid=1794978: Mon Jul 15 
21:06:58 2024 00:30:55.665 read: IOPS=483, BW=1934KiB/s (1981kB/s)(18.9MiB/10008msec) 00:30:55.665 slat (usec): min=5, max=117, avg=19.73, stdev=16.98 00:30:55.665 clat (usec): min=7470, max=60483, avg=32926.00, stdev=5533.56 00:30:55.665 lat (usec): min=7476, max=60489, avg=32945.74, stdev=5533.46 00:30:55.665 clat percentiles (usec): 00:30:55.665 | 1.00th=[17695], 5.00th=[25035], 10.00th=[30016], 20.00th=[31327], 00:30:55.665 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:55.665 | 70.00th=[32637], 80.00th=[33424], 90.00th=[39584], 95.00th=[44827], 00:30:55.665 | 99.00th=[53216], 99.50th=[54789], 99.90th=[60556], 99.95th=[60556], 00:30:55.665 | 99.99th=[60556] 00:30:55.665 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1933.26, stdev=84.35, samples=19 00:30:55.665 iops : min= 448, max= 512, avg=483.32, stdev=21.09, samples=19 00:30:55.665 lat (msec) : 10=0.25%, 20=1.57%, 50=96.40%, 100=1.78% 00:30:55.665 cpu : usr=98.82%, sys=0.83%, ctx=25, majf=0, minf=57 00:30:55.665 IO depths : 1=2.1%, 2=4.7%, 4=14.6%, 8=66.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:30:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.665 complete : 0=0.0%, 4=92.1%, 8=3.8%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.665 issued rwts: total=4840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.665 filename2: (groupid=0, jobs=1): err= 0: pid=1794979: Mon Jul 15 21:06:58 2024 00:30:55.665 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.4MiB/10003msec) 00:30:55.665 slat (usec): min=5, max=136, avg=28.91, stdev=20.62 00:30:55.665 clat (usec): min=3034, max=56318, avg=31876.77, stdev=2556.52 00:30:55.665 lat (usec): min=3040, max=56334, avg=31905.68, stdev=2556.55 00:30:55.665 clat percentiles (usec): 00:30:55.665 | 1.00th=[28181], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:30:55.665 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:55.665 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33817], 00:30:55.665 | 99.00th=[39584], 99.50th=[41681], 99.90th=[56361], 99.95th=[56361], 00:30:55.665 | 99.99th=[56361] 00:30:55.665 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1980.11, stdev=78.30, samples=19 00:30:55.665 iops : min= 448, max= 512, avg=494.95, stdev=19.57, samples=19 00:30:55.665 lat (msec) : 4=0.12%, 10=0.12%, 20=0.52%, 50=98.91%, 100=0.32% 00:30:55.665 cpu : usr=99.27%, sys=0.40%, ctx=16, majf=0, minf=80 00:30:55.665 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.665 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.665 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.665 filename2: (groupid=0, jobs=1): err= 0: pid=1794980: Mon Jul 15 21:06:58 2024 00:30:55.665 read: IOPS=507, BW=2031KiB/s (2080kB/s)(19.9MiB/10020msec) 00:30:55.665 slat (usec): min=5, max=118, avg=21.19, stdev=18.28 00:30:55.665 clat (usec): min=13186, max=59303, avg=31327.06, stdev=4138.21 00:30:55.665 lat (usec): min=13192, max=59345, avg=31348.25, stdev=4140.36 00:30:55.665 clat percentiles (usec): 00:30:55.665 | 1.00th=[17957], 5.00th=[21890], 10.00th=[27919], 20.00th=[30802], 00:30:55.665 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:55.665 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 
95.00th=[34341], 00:30:55.665 | 99.00th=[46400], 99.50th=[49546], 99.90th=[55837], 99.95th=[58983], 00:30:55.665 | 99.99th=[59507] 00:30:55.665 bw ( KiB/s): min= 1872, max= 2666, per=4.28%, avg=2031.95, stdev=179.38, samples=20 00:30:55.665 iops : min= 468, max= 666, avg=507.85, stdev=44.74, samples=20 00:30:55.665 lat (msec) : 20=2.28%, 50=97.37%, 100=0.35% 00:30:55.665 cpu : usr=98.70%, sys=0.93%, ctx=30, majf=0, minf=74 00:30:55.665 IO depths : 1=1.6%, 2=6.5%, 4=21.3%, 8=59.5%, 16=11.1%, 32=0.0%, >=64=0.0% 00:30:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.665 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.665 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.665 filename2: (groupid=0, jobs=1): err= 0: pid=1794981: Mon Jul 15 21:06:58 2024 00:30:55.665 read: IOPS=500, BW=2003KiB/s (2051kB/s)(19.6MiB/10021msec) 00:30:55.665 slat (usec): min=5, max=136, avg=17.39, stdev=17.62 00:30:55.665 clat (usec): min=12832, max=59985, avg=31820.98, stdev=4190.51 00:30:55.665 lat (usec): min=12839, max=60006, avg=31838.37, stdev=4191.53 00:30:55.665 clat percentiles (usec): 00:30:55.665 | 1.00th=[19792], 5.00th=[24773], 10.00th=[29230], 20.00th=[31065], 00:30:55.665 | 30.00th=[31327], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:55.665 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33817], 95.00th=[36963], 00:30:55.665 | 99.00th=[48497], 99.50th=[54789], 99.90th=[60031], 99.95th=[60031], 00:30:55.665 | 99.99th=[60031] 00:30:55.665 bw ( KiB/s): min= 1920, max= 2128, per=4.22%, avg=2001.45, stdev=65.01, samples=20 00:30:55.665 iops : min= 480, max= 532, avg=500.25, stdev=16.15, samples=20 00:30:55.665 lat (msec) : 20=1.32%, 50=97.73%, 100=0.96% 00:30:55.665 cpu : usr=98.88%, sys=0.76%, ctx=19, majf=0, minf=58 00:30:55.665 IO depths : 1=2.5%, 2=5.4%, 4=15.6%, 8=66.0%, 16=10.5%, 32=0.0%, >=64=0.0% 00:30:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.665 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.665 issued rwts: total=5019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.665 filename2: (groupid=0, jobs=1): err= 0: pid=1794982: Mon Jul 15 21:06:58 2024 00:30:55.665 read: IOPS=472, BW=1889KiB/s (1934kB/s)(18.5MiB/10003msec) 00:30:55.665 slat (usec): min=5, max=132, avg=17.73, stdev=17.11 00:30:55.665 clat (usec): min=8417, max=63764, avg=33780.61, stdev=6976.71 00:30:55.665 lat (usec): min=8426, max=63780, avg=33798.33, stdev=6976.26 00:30:55.665 clat percentiles (usec): 00:30:55.665 | 1.00th=[16188], 5.00th=[23462], 10.00th=[28967], 20.00th=[31065], 00:30:55.665 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32637], 00:30:55.665 | 70.00th=[33424], 80.00th=[37487], 90.00th=[43254], 95.00th=[48497], 00:30:55.665 | 99.00th=[54789], 99.50th=[56361], 99.90th=[63701], 99.95th=[63701], 00:30:55.665 | 99.99th=[63701] 00:30:55.665 bw ( KiB/s): min= 1715, max= 1992, per=3.95%, avg=1874.05, stdev=72.51, samples=19 00:30:55.665 iops : min= 428, max= 498, avg=468.47, stdev=18.22, samples=19 00:30:55.665 lat (msec) : 10=0.13%, 20=2.37%, 50=93.54%, 100=3.96% 00:30:55.665 cpu : usr=99.02%, sys=0.63%, ctx=15, majf=0, minf=97 00:30:55.665 IO depths : 1=0.7%, 2=1.4%, 4=9.0%, 8=74.8%, 16=14.2%, 32=0.0%, >=64=0.0% 00:30:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:30:55.665 complete : 0=0.0%, 4=90.7%, 8=5.5%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.665 issued rwts: total=4724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.665 filename2: (groupid=0, jobs=1): err= 0: pid=1794983: Mon Jul 15 21:06:58 2024 00:30:55.665 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10015msec) 00:30:55.665 slat (usec): min=5, max=130, avg=22.58, stdev=16.73 00:30:55.665 clat (usec): min=10963, max=49252, avg=31916.72, stdev=2559.29 00:30:55.665 lat (usec): min=10971, max=49282, avg=31939.30, stdev=2559.96 00:30:55.665 clat percentiles (usec): 00:30:55.665 | 1.00th=[20841], 5.00th=[29754], 10.00th=[30540], 20.00th=[31327], 00:30:55.665 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:55.665 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[34341], 00:30:55.665 | 99.00th=[40633], 99.50th=[43779], 99.90th=[49021], 99.95th=[49021], 00:30:55.665 | 99.99th=[49021] 00:30:55.665 bw ( KiB/s): min= 1916, max= 2171, per=4.20%, avg=1993.11, stdev=76.89, samples=19 00:30:55.665 iops : min= 479, max= 542, avg=498.16, stdev=19.07, samples=19 00:30:55.665 lat (msec) : 20=0.72%, 50=99.28% 00:30:55.665 cpu : usr=98.56%, sys=1.03%, ctx=26, majf=0, minf=76 00:30:55.665 IO depths : 1=5.4%, 2=11.1%, 4=23.5%, 8=53.0%, 16=7.1%, 32=0.0%, >=64=0.0% 00:30:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.665 complete : 0=0.0%, 4=93.7%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.665 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.665 filename2: (groupid=0, jobs=1): err= 0: pid=1794985: Mon Jul 15 21:06:58 2024 00:30:55.665 read: IOPS=502, BW=2011KiB/s (2059kB/s)(19.8MiB/10071msec) 00:30:55.665 slat (usec): min=5, max=123, avg=12.65, stdev=11.57 00:30:55.665 clat (usec): min=5187, max=71684, avg=31707.40, stdev=5088.55 00:30:55.665 lat (usec): min=5199, max=71713, avg=31720.06, stdev=5089.68 00:30:55.665 clat percentiles (usec): 00:30:55.665 | 1.00th=[12125], 5.00th=[23462], 10.00th=[28967], 20.00th=[31065], 00:30:55.665 | 30.00th=[31327], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:55.665 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[38011], 00:30:55.665 | 99.00th=[46924], 99.50th=[56886], 99.90th=[71828], 99.95th=[71828], 00:30:55.665 | 99.99th=[71828] 00:30:55.665 bw ( KiB/s): min= 1916, max= 2304, per=4.25%, avg=2017.25, stdev=91.03, samples=20 00:30:55.665 iops : min= 479, max= 576, avg=504.20, stdev=22.75, samples=20 00:30:55.665 lat (msec) : 10=0.59%, 20=1.78%, 50=96.72%, 100=0.91% 00:30:55.665 cpu : usr=98.49%, sys=1.08%, ctx=27, majf=0, minf=80 00:30:55.665 IO depths : 1=3.4%, 2=6.9%, 4=18.3%, 8=61.8%, 16=9.6%, 32=0.0%, >=64=0.0% 00:30:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.665 complete : 0=0.0%, 4=92.7%, 8=2.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.665 issued rwts: total=5062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.665 filename2: (groupid=0, jobs=1): err= 0: pid=1794986: Mon Jul 15 21:06:58 2024 00:30:55.665 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.7MiB/10005msec) 00:30:55.665 slat (usec): min=6, max=204, avg=21.34, stdev=18.93 00:30:55.665 clat (usec): min=7308, max=65044, avg=33261.97, stdev=6098.45 00:30:55.665 lat (usec): min=7320, max=65060, avg=33283.32, 
stdev=6097.38 00:30:55.665 clat percentiles (usec): 00:30:55.665 | 1.00th=[16319], 5.00th=[25035], 10.00th=[30540], 20.00th=[31065], 00:30:55.665 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32637], 00:30:55.665 | 70.00th=[32900], 80.00th=[34341], 90.00th=[41157], 95.00th=[45876], 00:30:55.666 | 99.00th=[55837], 99.50th=[56886], 99.90th=[64750], 99.95th=[64750], 00:30:55.666 | 99.99th=[65274] 00:30:55.666 bw ( KiB/s): min= 1760, max= 2048, per=4.02%, avg=1905.47, stdev=81.04, samples=19 00:30:55.666 iops : min= 440, max= 512, avg=476.37, stdev=20.26, samples=19 00:30:55.666 lat (msec) : 10=0.33%, 20=1.94%, 50=95.34%, 100=2.38% 00:30:55.666 cpu : usr=96.10%, sys=2.05%, ctx=104, majf=0, minf=47 00:30:55.666 IO depths : 1=2.2%, 2=4.3%, 4=13.0%, 8=68.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:30:55.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.666 complete : 0=0.0%, 4=91.4%, 8=4.7%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.666 issued rwts: total=4790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.666 00:30:55.666 Run status group 0 (all jobs): 00:30:55.666 READ: bw=46.3MiB/s (48.5MB/s), 1889KiB/s-2074KiB/s (1934kB/s-2124kB/s), io=466MiB (489MB), run=10003-10071msec 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.666 bdev_null0 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.666 21:06:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.666 [2024-07-15 21:06:58.500116] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.666 bdev_null1 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:55.666 21:06:58 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:55.666 { 00:30:55.666 "params": { 00:30:55.666 "name": "Nvme$subsystem", 00:30:55.666 "trtype": "$TEST_TRANSPORT", 00:30:55.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.666 "adrfam": "ipv4", 00:30:55.666 "trsvcid": "$NVMF_PORT", 00:30:55.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.666 "hdgst": ${hdgst:-false}, 00:30:55.666 "ddgst": ${ddgst:-false} 00:30:55.666 }, 00:30:55.666 "method": "bdev_nvme_attach_controller" 00:30:55.666 } 00:30:55.666 EOF 00:30:55.666 )") 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:55.666 21:06:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:55.666 { 00:30:55.666 "params": { 00:30:55.666 "name": "Nvme$subsystem", 00:30:55.667 "trtype": "$TEST_TRANSPORT", 00:30:55.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.667 "adrfam": "ipv4", 00:30:55.667 "trsvcid": "$NVMF_PORT", 00:30:55.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.667 "hdgst": ${hdgst:-false}, 00:30:55.667 "ddgst": ${ddgst:-false} 00:30:55.667 }, 00:30:55.667 "method": "bdev_nvme_attach_controller" 00:30:55.667 } 00:30:55.667 EOF 00:30:55.667 )") 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:55.667 "params": { 00:30:55.667 "name": "Nvme0", 00:30:55.667 "trtype": "tcp", 00:30:55.667 "traddr": "10.0.0.2", 00:30:55.667 "adrfam": "ipv4", 00:30:55.667 "trsvcid": "4420", 00:30:55.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.667 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:55.667 "hdgst": false, 00:30:55.667 "ddgst": false 00:30:55.667 }, 00:30:55.667 "method": "bdev_nvme_attach_controller" 00:30:55.667 },{ 00:30:55.667 "params": { 00:30:55.667 "name": "Nvme1", 00:30:55.667 "trtype": "tcp", 00:30:55.667 "traddr": "10.0.0.2", 00:30:55.667 "adrfam": "ipv4", 00:30:55.667 "trsvcid": "4420", 00:30:55.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:55.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:55.667 "hdgst": false, 00:30:55.667 "ddgst": false 00:30:55.667 }, 00:30:55.667 "method": "bdev_nvme_attach_controller" 00:30:55.667 }' 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:55.667 21:06:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.667 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:55.667 ... 00:30:55.667 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:55.667 ... 
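Note on the flow traced above: the harness assembles two bdev_nvme_attach_controller fragments and feeds them to fio's spdk_bdev ioengine over /dev/fd/62. A minimal standalone sketch of the same flow, with assumed file names (nvme_attach.json, dif.fio), only the Nvme0 controller, and the usual SPDK "subsystems"/"bdev" wrapper that gen_nvmf_target_json is assumed to add around the fragments printed in the trace:

# Sketch only; paths and file names are placeholders, parameters copied from the trace above.
cat > nvme_attach.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# fio loads the SPDK bdev engine via LD_PRELOAD and opens the attached bdevs as job filenames.
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=nvme_attach.json dif.fio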
00:30:55.667 fio-3.35 00:30:55.667 Starting 4 threads 00:30:55.667 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.957 00:31:00.957 filename0: (groupid=0, jobs=1): err= 0: pid=1797175: Mon Jul 15 21:07:04 2024 00:31:00.957 read: IOPS=2302, BW=18.0MiB/s (18.9MB/s)(90.0MiB/5003msec) 00:31:00.957 slat (nsec): min=5404, max=35754, avg=7055.94, stdev=1770.02 00:31:00.957 clat (usec): min=1474, max=44730, avg=3455.91, stdev=1274.33 00:31:00.957 lat (usec): min=1483, max=44766, avg=3462.97, stdev=1274.51 00:31:00.957 clat percentiles (usec): 00:31:00.957 | 1.00th=[ 2057], 5.00th=[ 2442], 10.00th=[ 2638], 20.00th=[ 2933], 00:31:00.957 | 30.00th=[ 3130], 40.00th=[ 3261], 50.00th=[ 3392], 60.00th=[ 3490], 00:31:00.957 | 70.00th=[ 3654], 80.00th=[ 3785], 90.00th=[ 4293], 95.00th=[ 4817], 00:31:00.957 | 99.00th=[ 5407], 99.50th=[ 5604], 99.90th=[ 5932], 99.95th=[44827], 00:31:00.957 | 99.99th=[44827] 00:31:00.957 bw ( KiB/s): min=16768, max=18768, per=27.90%, avg=18375.11, stdev=624.11, samples=9 00:31:00.957 iops : min= 2096, max= 2346, avg=2296.89, stdev=78.01, samples=9 00:31:00.957 lat (msec) : 2=0.80%, 4=85.77%, 10=13.36%, 50=0.07% 00:31:00.957 cpu : usr=97.10%, sys=2.64%, ctx=8, majf=0, minf=0 00:31:00.957 IO depths : 1=0.3%, 2=1.3%, 4=69.8%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:00.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.957 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.957 issued rwts: total=11517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.957 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:00.957 filename0: (groupid=0, jobs=1): err= 0: pid=1797176: Mon Jul 15 21:07:04 2024 00:31:00.957 read: IOPS=1915, BW=15.0MiB/s (15.7MB/s)(74.9MiB/5002msec) 00:31:00.957 slat (nsec): min=5404, max=36375, avg=6027.27, stdev=1840.08 00:31:00.957 clat (usec): min=2330, max=7140, avg=4159.90, stdev=697.59 00:31:00.957 lat (usec): min=2336, max=7146, avg=4165.93, stdev=697.57 00:31:00.957 clat percentiles (usec): 00:31:00.957 | 1.00th=[ 2835], 5.00th=[ 3195], 10.00th=[ 3359], 20.00th=[ 3556], 00:31:00.957 | 30.00th=[ 3752], 40.00th=[ 3884], 50.00th=[ 4113], 60.00th=[ 4228], 00:31:00.957 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 5080], 95.00th=[ 5407], 00:31:00.957 | 99.00th=[ 6063], 99.50th=[ 6390], 99.90th=[ 6783], 99.95th=[ 6915], 00:31:00.957 | 99.99th=[ 7111] 00:31:00.957 bw ( KiB/s): min=14976, max=15680, per=23.28%, avg=15331.56, stdev=214.93, samples=9 00:31:00.957 iops : min= 1872, max= 1960, avg=1916.44, stdev=26.87, samples=9 00:31:00.957 lat (msec) : 4=45.00%, 10=55.00% 00:31:00.957 cpu : usr=97.04%, sys=2.72%, ctx=7, majf=0, minf=9 00:31:00.957 IO depths : 1=0.3%, 2=1.5%, 4=70.1%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:00.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.957 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.957 issued rwts: total=9581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.957 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:00.957 filename1: (groupid=0, jobs=1): err= 0: pid=1797177: Mon Jul 15 21:07:04 2024 00:31:00.957 read: IOPS=2042, BW=16.0MiB/s (16.7MB/s)(79.8MiB/5002msec) 00:31:00.957 slat (nsec): min=5405, max=37415, avg=6028.02, stdev=1849.28 00:31:00.957 clat (usec): min=1549, max=7651, avg=3899.16, stdev=635.57 00:31:00.957 lat (usec): min=1563, max=7656, avg=3905.19, stdev=635.47 00:31:00.957 clat percentiles (usec): 00:31:00.957 | 1.00th=[ 2606], 5.00th=[ 2966], 
10.00th=[ 3163], 20.00th=[ 3359], 00:31:00.957 | 30.00th=[ 3523], 40.00th=[ 3687], 50.00th=[ 3818], 60.00th=[ 4015], 00:31:00.957 | 70.00th=[ 4178], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5014], 00:31:00.957 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 6194], 99.95th=[ 6390], 00:31:00.957 | 99.99th=[ 6587] 00:31:00.957 bw ( KiB/s): min=16096, max=16512, per=24.83%, avg=16348.44, stdev=128.94, samples=9 00:31:00.957 iops : min= 2012, max= 2064, avg=2043.56, stdev=16.12, samples=9 00:31:00.957 lat (msec) : 2=0.05%, 4=59.36%, 10=40.59% 00:31:00.957 cpu : usr=96.34%, sys=3.42%, ctx=8, majf=0, minf=9 00:31:00.957 IO depths : 1=0.3%, 2=1.6%, 4=69.9%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:00.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.957 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.957 issued rwts: total=10219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.957 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:00.957 filename1: (groupid=0, jobs=1): err= 0: pid=1797178: Mon Jul 15 21:07:04 2024 00:31:00.957 read: IOPS=2021, BW=15.8MiB/s (16.6MB/s)(79.6MiB/5043msec) 00:31:00.957 slat (nsec): min=5406, max=39536, avg=6033.78, stdev=1840.69 00:31:00.957 clat (usec): min=1752, max=43958, avg=3932.58, stdev=1134.62 00:31:00.957 lat (usec): min=1758, max=43964, avg=3938.62, stdev=1134.59 00:31:00.957 clat percentiles (usec): 00:31:00.957 | 1.00th=[ 2606], 5.00th=[ 2999], 10.00th=[ 3163], 20.00th=[ 3392], 00:31:00.957 | 30.00th=[ 3523], 40.00th=[ 3687], 50.00th=[ 3818], 60.00th=[ 4015], 00:31:00.957 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5080], 00:31:00.957 | 99.00th=[ 5669], 99.50th=[ 5932], 99.90th=[ 6325], 99.95th=[42206], 00:31:00.957 | 99.99th=[42206] 00:31:00.957 bw ( KiB/s): min=16080, max=16528, per=24.75%, avg=16300.80, stdev=154.17, samples=10 00:31:00.957 iops : min= 2010, max= 2066, avg=2037.60, stdev=19.27, samples=10 00:31:00.957 lat (msec) : 2=0.02%, 4=59.17%, 10=40.75%, 50=0.06% 00:31:00.957 cpu : usr=96.63%, sys=3.11%, ctx=8, majf=0, minf=9 00:31:00.957 IO depths : 1=0.3%, 2=1.6%, 4=69.3%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:00.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.957 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.957 issued rwts: total=10194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.957 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:00.957 00:31:00.957 Run status group 0 (all jobs): 00:31:00.957 READ: bw=64.3MiB/s (67.4MB/s), 15.0MiB/s-18.0MiB/s (15.7MB/s-18.9MB/s), io=324MiB (340MB), run=5002-5043msec 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.219 00:31:01.219 real 0m24.469s 00:31:01.219 user 5m17.354s 00:31:01.219 sys 0m4.292s 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:01.219 21:07:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.219 ************************************ 00:31:01.219 END TEST fio_dif_rand_params 00:31:01.219 ************************************ 00:31:01.219 21:07:04 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:01.219 21:07:04 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:01.219 21:07:04 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:01.219 21:07:04 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:01.219 21:07:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:01.219 ************************************ 00:31:01.219 START TEST fio_dif_digest 00:31:01.219 ************************************ 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.219 21:07:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:01.219 bdev_null0 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:01.219 [2024-07-15 21:07:05.034078] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:01.219 21:07:05 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:01.220 21:07:05 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:01.220 { 00:31:01.220 "params": { 00:31:01.220 "name": "Nvme$subsystem", 00:31:01.220 "trtype": "$TEST_TRANSPORT", 00:31:01.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.220 "adrfam": "ipv4", 00:31:01.220 "trsvcid": "$NVMF_PORT", 00:31:01.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.220 "hdgst": ${hdgst:-false}, 00:31:01.220 "ddgst": ${ddgst:-false} 00:31:01.220 }, 00:31:01.220 "method": "bdev_nvme_attach_controller" 00:31:01.220 } 00:31:01.220 EOF 00:31:01.220 )") 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:01.220 "params": { 00:31:01.220 "name": "Nvme0", 00:31:01.220 "trtype": "tcp", 00:31:01.220 "traddr": "10.0.0.2", 00:31:01.220 "adrfam": "ipv4", 00:31:01.220 "trsvcid": "4420", 00:31:01.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.220 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:01.220 "hdgst": true, 00:31:01.220 "ddgst": true 00:31:01.220 }, 00:31:01.220 "method": "bdev_nvme_attach_controller" 00:31:01.220 }' 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:01.220 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:01.518 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:01.518 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:01.518 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:01.518 21:07:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.780 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:01.780 ... 
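Per the trace above, the fio_dif_digest case differs from the preceding runs in two ways: the null bdev is created with --dif-type 3, and the generated attach parameters set "hdgst": true and "ddgst": true, so the NVMe/TCP connection carries header and data digests. A sketch of the target-side setup using scripts/rpc.py directly (rpc_cmd in the trace is assumed to be a thin wrapper around it), with arguments copied from the trace:

# Target-side setup as traced; a sketch, not the harness's exact code path.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
# The initiator-side JSON above then attaches with hdgst/ddgst enabled.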
00:31:01.780 fio-3.35 00:31:01.780 Starting 3 threads 00:31:01.780 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.016 00:31:14.016 filename0: (groupid=0, jobs=1): err= 0: pid=1798617: Mon Jul 15 21:07:15 2024 00:31:14.016 read: IOPS=154, BW=19.3MiB/s (20.3MB/s)(194MiB/10048msec) 00:31:14.016 slat (nsec): min=5796, max=31724, avg=6596.09, stdev=1201.92 00:31:14.016 clat (usec): min=6899, max=95009, avg=19361.71, stdev=16585.43 00:31:14.016 lat (usec): min=6905, max=95015, avg=19368.31, stdev=16585.50 00:31:14.016 clat percentiles (usec): 00:31:14.016 | 1.00th=[ 7767], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10159], 00:31:14.016 | 30.00th=[10814], 40.00th=[11600], 50.00th=[12518], 60.00th=[13304], 00:31:14.016 | 70.00th=[14353], 80.00th=[16057], 90.00th=[52691], 95.00th=[53740], 00:31:14.016 | 99.00th=[56361], 99.50th=[91751], 99.90th=[94897], 99.95th=[94897], 00:31:14.016 | 99.99th=[94897] 00:31:14.016 bw ( KiB/s): min=13824, max=30208, per=35.53%, avg=19852.80, stdev=5004.49, samples=20 00:31:14.016 iops : min= 108, max= 236, avg=155.10, stdev=39.10, samples=20 00:31:14.016 lat (msec) : 10=18.53%, 20=63.71%, 50=0.32%, 100=17.44% 00:31:14.016 cpu : usr=96.18%, sys=3.57%, ctx=25, majf=0, minf=172 00:31:14.016 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.016 issued rwts: total=1554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.016 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:14.016 filename0: (groupid=0, jobs=1): err= 0: pid=1798618: Mon Jul 15 21:07:15 2024 00:31:14.016 read: IOPS=149, BW=18.7MiB/s (19.6MB/s)(188MiB/10022msec) 00:31:14.016 slat (nsec): min=5686, max=36329, avg=7001.35, stdev=1437.31 00:31:14.016 clat (msec): min=6, max=137, avg=20.03, stdev=17.52 00:31:14.016 lat (msec): min=6, max=137, avg=20.04, stdev=17.52 00:31:14.016 clat percentiles (msec): 00:31:14.016 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:31:14.016 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:31:14.016 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 54], 95.00th=[ 55], 00:31:14.016 | 99.00th=[ 58], 99.50th=[ 95], 99.90th=[ 97], 99.95th=[ 138], 00:31:14.016 | 99.99th=[ 138] 00:31:14.016 bw ( KiB/s): min=11520, max=26880, per=34.29%, avg=19161.60, stdev=4479.15, samples=20 00:31:14.016 iops : min= 90, max= 210, avg=149.70, stdev=34.99, samples=20 00:31:14.016 lat (msec) : 10=16.33%, 20=64.87%, 50=0.40%, 100=18.33%, 250=0.07% 00:31:14.016 cpu : usr=95.98%, sys=3.78%, ctx=18, majf=0, minf=96 00:31:14.016 IO depths : 1=2.7%, 2=97.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.016 issued rwts: total=1500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.016 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:14.016 filename0: (groupid=0, jobs=1): err= 0: pid=1798619: Mon Jul 15 21:07:15 2024 00:31:14.016 read: IOPS=132, BW=16.6MiB/s (17.4MB/s)(167MiB/10049msec) 00:31:14.016 slat (nsec): min=5660, max=38188, avg=6456.82, stdev=1427.86 00:31:14.016 clat (usec): min=7092, max=98251, avg=22538.87, stdev=18754.49 00:31:14.016 lat (usec): min=7098, max=98257, avg=22545.32, stdev=18754.49 00:31:14.016 clat percentiles (usec): 00:31:14.017 | 1.00th=[ 8094], 5.00th=[ 9241], 
10.00th=[ 9896], 20.00th=[10945], 00:31:14.017 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13304], 60.00th=[14091], 00:31:14.017 | 70.00th=[15008], 80.00th=[51643], 90.00th=[53740], 95.00th=[54789], 00:31:14.017 | 99.00th=[57410], 99.50th=[94897], 99.90th=[95945], 99.95th=[98042], 00:31:14.017 | 99.99th=[98042] 00:31:14.017 bw ( KiB/s): min=12288, max=21504, per=30.51%, avg=17049.60, stdev=3190.31, samples=20 00:31:14.017 iops : min= 96, max= 168, avg=133.20, stdev=24.92, samples=20 00:31:14.017 lat (msec) : 10=10.73%, 20=65.34%, 50=0.45%, 100=23.48% 00:31:14.017 cpu : usr=96.27%, sys=3.50%, ctx=16, majf=0, minf=171 00:31:14.017 IO depths : 1=2.1%, 2=97.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.017 issued rwts: total=1333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.017 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:14.017 00:31:14.017 Run status group 0 (all jobs): 00:31:14.017 READ: bw=54.6MiB/s (57.2MB/s), 16.6MiB/s-19.3MiB/s (17.4MB/s-20.3MB/s), io=548MiB (575MB), run=10022-10049msec 00:31:14.017 21:07:16 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:14.017 21:07:16 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:14.017 21:07:16 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:14.017 21:07:16 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:14.017 21:07:16 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:14.017 21:07:16 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:14.017 21:07:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.017 21:07:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:14.017 21:07:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.017 21:07:16 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:14.017 21:07:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.017 21:07:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:14.017 21:07:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.017 00:31:14.017 real 0m11.122s 00:31:14.017 user 0m44.478s 00:31:14.017 sys 0m1.381s 00:31:14.017 21:07:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:14.017 21:07:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:14.017 ************************************ 00:31:14.017 END TEST fio_dif_digest 00:31:14.017 ************************************ 00:31:14.017 21:07:16 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:14.017 21:07:16 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:14.017 21:07:16 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:14.017 21:07:16 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:14.017 21:07:16 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:14.017 21:07:16 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:14.017 21:07:16 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:14.017 21:07:16 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:14.017 21:07:16 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:14.017 
rmmod nvme_tcp 00:31:14.017 rmmod nvme_fabrics 00:31:14.017 rmmod nvme_keyring 00:31:14.017 21:07:16 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:14.017 21:07:16 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:14.017 21:07:16 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:14.017 21:07:16 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1788222 ']' 00:31:14.017 21:07:16 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1788222 00:31:14.017 21:07:16 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1788222 ']' 00:31:14.017 21:07:16 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1788222 00:31:14.017 21:07:16 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:14.017 21:07:16 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:14.017 21:07:16 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1788222 00:31:14.017 21:07:16 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:14.017 21:07:16 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:14.017 21:07:16 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1788222' 00:31:14.017 killing process with pid 1788222 00:31:14.017 21:07:16 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1788222 00:31:14.017 21:07:16 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1788222 00:31:14.017 21:07:16 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:14.017 21:07:16 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:15.928 Waiting for block devices as requested 00:31:15.928 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:15.928 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:15.928 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:16.189 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:16.189 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:16.189 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:16.450 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:16.450 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:16.450 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:16.710 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:16.710 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:16.971 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:16.971 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:16.971 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:16.971 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:17.233 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:17.233 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:17.495 21:07:21 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:17.495 21:07:21 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:17.495 21:07:21 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:17.495 21:07:21 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:17.495 21:07:21 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.495 21:07:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:17.495 21:07:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.043 21:07:23 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:20.043 00:31:20.043 real 1m17.240s 00:31:20.043 user 8m7.446s 00:31:20.043 sys 0m19.780s 00:31:20.043 21:07:23 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:20.043 
21:07:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:20.043 ************************************ 00:31:20.043 END TEST nvmf_dif 00:31:20.043 ************************************ 00:31:20.043 21:07:23 -- common/autotest_common.sh@1142 -- # return 0 00:31:20.043 21:07:23 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:20.043 21:07:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:20.043 21:07:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:20.043 21:07:23 -- common/autotest_common.sh@10 -- # set +x 00:31:20.043 ************************************ 00:31:20.043 START TEST nvmf_abort_qd_sizes 00:31:20.043 ************************************ 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:20.043 * Looking for test storage... 00:31:20.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.043 21:07:23 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.044 21:07:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:20.044 21:07:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:26.680 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:26.680 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:26.680 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:26.681 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:26.681 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
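The nvmf_tcp_init sequence in the trace that follows puts one port of the E810 pair into a private network namespace for the target and keeps the other on the host as the initiator. Condensed, using the interface names enumerated above (cvl_0_0, cvl_0_1) and the 10.0.0.x addresses the test assigns, it amounts to:

# Condensed sketch of the setup traced below; every command appears in the trace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator (host) side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host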
00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:26.681 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.942 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.942 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.942 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:26.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:26.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:31:26.942 00:31:26.942 --- 10.0.0.2 ping statistics --- 00:31:26.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.942 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:31:26.942 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:26.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:31:26.942 00:31:26.942 --- 10.0.0.1 ping statistics --- 00:31:26.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.942 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:31:26.942 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.942 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:26.942 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:26.942 21:07:30 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:30.252 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:30.252 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1807932 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1807932 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1807932 ']' 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
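The target-side namespace that nvmf_tcp_init assembles above reduces to the following commands (interface names and addresses are the ones printed in this run; this is a sketch of the net effect, not the helper verbatim):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1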
00:31:30.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:30.824 21:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:30.824 [2024-07-15 21:07:34.550759] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:31:30.824 [2024-07-15 21:07:34.550819] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.824 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.824 [2024-07-15 21:07:34.624444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:30.824 [2024-07-15 21:07:34.700740] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.824 [2024-07-15 21:07:34.700781] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.824 [2024-07-15 21:07:34.700789] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.824 [2024-07-15 21:07:34.700796] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.824 [2024-07-15 21:07:34.700802] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.824 [2024-07-15 21:07:34.700942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.824 [2024-07-15 21:07:34.701073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:30.824 [2024-07-15 21:07:34.701225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.824 [2024-07-15 21:07:34.701225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:31:31.764 21:07:35 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:31.764 21:07:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:31.764 ************************************ 00:31:31.764 START TEST spdk_target_abort 00:31:31.764 ************************************ 00:31:31.764 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:31:31.764 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:31.764 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:31.764 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.764 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:32.024 spdk_targetn1 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:32.024 [2024-07-15 21:07:35.733213] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:32.024 [2024-07-15 21:07:35.773445] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:32.024 21:07:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:32.024 EAL: No free 2048 kB hugepages 
reported on node 1 00:31:32.283 [2024-07-15 21:07:35.939140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:344 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:31:32.283 [2024-07-15 21:07:35.939168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:31:32.283 [2024-07-15 21:07:35.945581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:440 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:31:32.283 [2024-07-15 21:07:35.945596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0038 p:1 m:0 dnr:0 00:31:32.283 [2024-07-15 21:07:35.945741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:456 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:31:32.283 [2024-07-15 21:07:35.945750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:003a p:1 m:0 dnr:0 00:31:32.283 [2024-07-15 21:07:35.961620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:904 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:31:32.283 [2024-07-15 21:07:35.961635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:31:32.283 [2024-07-15 21:07:36.016620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2504 len:8 PRP1 0x2000078be000 PRP2 0x0 00:31:32.283 [2024-07-15 21:07:36.016636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:35.584 Initializing NVMe Controllers 00:31:35.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:35.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:35.584 Initialization complete. Launching workers. 
00:31:35.584 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10533, failed: 5 00:31:35.584 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3249, failed to submit 7289 00:31:35.584 success 769, unsuccess 2480, failed 0 00:31:35.584 21:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:35.584 21:07:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:35.584 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.584 [2024-07-15 21:07:39.219255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:2504 len:8 PRP1 0x200007c58000 PRP2 0x0 00:31:35.584 [2024-07-15 21:07:39.219287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:35.844 [2024-07-15 21:07:39.697524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:13504 len:8 PRP1 0x200007c58000 PRP2 0x0 00:31:35.844 [2024-07-15 21:07:39.697553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:00a0 p:1 m:0 dnr:0 00:31:38.387 Initializing NVMe Controllers 00:31:38.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:38.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:38.387 Initialization complete. Launching workers. 00:31:38.387 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8559, failed: 2 00:31:38.387 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1216, failed to submit 7345 00:31:38.387 success 391, unsuccess 825, failed 0 00:31:38.387 21:07:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:38.387 21:07:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:38.647 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.647 [2024-07-15 21:07:42.378849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:167 nsid:1 lba:768 len:8 PRP1 0x200007922000 PRP2 0x0 00:31:38.647 [2024-07-15 21:07:42.378895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:167 cdw0:0 sqhd:0098 p:0 m:0 dnr:0 00:31:41.950 Initializing NVMe Controllers 00:31:41.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:41.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:41.950 Initialization complete. Launching workers. 
00:31:41.950 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41805, failed: 1 00:31:41.950 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2490, failed to submit 39316 00:31:41.950 success 604, unsuccess 1886, failed 0 00:31:41.950 21:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:41.950 21:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.950 21:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:41.950 21:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.950 21:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:41.951 21:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.951 21:07:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1807932 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1807932 ']' 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1807932 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1807932 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1807932' 00:31:43.866 killing process with pid 1807932 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1807932 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1807932 00:31:43.866 00:31:43.866 real 0m12.028s 00:31:43.866 user 0m48.681s 00:31:43.866 sys 0m2.041s 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:43.866 ************************************ 00:31:43.866 END TEST spdk_target_abort 00:31:43.866 ************************************ 00:31:43.866 21:07:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:43.866 21:07:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:43.866 21:07:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:43.866 21:07:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:43.866 21:07:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:43.866 
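The spdk_target_abort case that just finished drives the target entirely through rpc_cmd; stripped of the wrappers, the setup, the queue-depth sweep, and the teardown come down to roughly the sequence below (rpc.py stands in for the rpc_cmd helper, the abort binary lives in the SPDK build tree, and the values are the ones used in this run):

  rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
  rpc.py bdev_nvme_detach_controller spdk_target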
************************************ 00:31:43.866 START TEST kernel_target_abort 00:31:43.866 ************************************ 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:43.866 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:43.867 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:43.867 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:43.867 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:43.867 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:43.867 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:43.867 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:43.867 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:43.867 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:43.867 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:43.867 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:43.867 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:43.867 21:07:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:47.186 Waiting for block devices as requested 00:31:47.186 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:47.186 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:47.186 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:47.448 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:47.448 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:47.448 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:47.708 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:47.708 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:47.708 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:47.969 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:47.969 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:47.969 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:48.230 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:48.230 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:48.230 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:48.230 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:48.491 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:48.752 No valid GPT data, bailing 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:48.752 21:07:52 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:48.752 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:31:49.013 00:31:49.013 Discovery Log Number of Records 2, Generation counter 2 00:31:49.013 =====Discovery Log Entry 0====== 00:31:49.013 trtype: tcp 00:31:49.013 adrfam: ipv4 00:31:49.013 subtype: current discovery subsystem 00:31:49.013 treq: not specified, sq flow control disable supported 00:31:49.013 portid: 1 00:31:49.013 trsvcid: 4420 00:31:49.013 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:49.013 traddr: 10.0.0.1 00:31:49.013 eflags: none 00:31:49.013 sectype: none 00:31:49.013 =====Discovery Log Entry 1====== 00:31:49.013 trtype: tcp 00:31:49.013 adrfam: ipv4 00:31:49.013 subtype: nvme subsystem 00:31:49.013 treq: not specified, sq flow control disable supported 00:31:49.013 portid: 1 00:31:49.013 trsvcid: 4420 00:31:49.013 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:49.013 traddr: 10.0.0.1 00:31:49.013 eflags: none 00:31:49.013 sectype: none 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:49.013 21:07:52 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:49.013 21:07:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:49.013 EAL: No free 2048 kB hugepages reported on node 1 00:31:52.307 Initializing NVMe Controllers 00:31:52.307 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:52.307 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:52.307 Initialization complete. Launching workers. 00:31:52.307 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 45694, failed: 0 00:31:52.307 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 45694, failed to submit 0 00:31:52.307 success 0, unsuccess 45694, failed 0 00:31:52.307 21:07:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:52.307 21:07:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:52.307 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.605 Initializing NVMe Controllers 00:31:55.605 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:55.605 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:55.605 Initialization complete. Launching workers. 
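Stepping back to the configure_kernel_target sequence above: the xtrace hides the redirection targets of those echo calls, but the values line up with the standard nvmet configfs attributes, so the kernel target it builds looks roughly like the following (the attribute file names are the usual nvmet ones and are assumed here, not shown in the trace):

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet                       # nvmet_tcp ends up loaded as well; the teardown removes both
  mkdir -p "$subsys/namespaces/1" "$port"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list the discovery subsystem plus testnqn, as above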
00:31:55.605 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 86115, failed: 0 00:31:55.605 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21682, failed to submit 64433 00:31:55.605 success 0, unsuccess 21682, failed 0 00:31:55.605 21:07:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:55.605 21:07:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:55.605 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.151 Initializing NVMe Controllers 00:31:58.151 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:58.151 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:58.151 Initialization complete. Launching workers. 00:31:58.151 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82968, failed: 0 00:31:58.152 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20730, failed to submit 62238 00:31:58.152 success 0, unsuccess 20730, failed 0 00:31:58.152 21:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:58.152 21:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:58.152 21:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:58.152 21:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:58.152 21:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:58.152 21:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:58.152 21:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:58.152 21:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:58.152 21:08:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:58.152 21:08:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:01.455 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:01.455 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:01.455 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:01.455 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:01.455 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:01.716 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:01.716 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:01.716 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:01.716 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:01.716 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:01.716 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:01.716 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:01.716 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:01.716 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:32:01.716 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:01.716 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:03.632 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:03.893 00:32:03.893 real 0m20.004s 00:32:03.893 user 0m7.879s 00:32:03.893 sys 0m6.444s 00:32:03.893 21:08:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:03.893 21:08:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:03.893 ************************************ 00:32:03.893 END TEST kernel_target_abort 00:32:03.893 ************************************ 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:03.893 rmmod nvme_tcp 00:32:03.893 rmmod nvme_fabrics 00:32:03.893 rmmod nvme_keyring 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1807932 ']' 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1807932 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1807932 ']' 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1807932 00:32:03.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1807932) - No such process 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1807932 is not found' 00:32:03.893 Process with pid 1807932 is not found 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:03.893 21:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:07.196 Waiting for block devices as requested 00:32:07.196 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:07.196 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:07.457 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:07.457 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:07.457 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:07.457 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:07.718 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:07.718 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:07.718 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:07.989 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:07.989 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:08.257 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:08.257 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:08.257 0000:00:01.2 (8086 0b00): vfio-pci -> 
ioatdma 00:32:08.257 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:08.517 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:08.517 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:08.777 21:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:08.777 21:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:08.777 21:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:08.777 21:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:08.777 21:08:12 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.777 21:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:08.777 21:08:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.319 21:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:11.319 00:32:11.319 real 0m51.198s 00:32:11.319 user 1m1.791s 00:32:11.319 sys 0m19.047s 00:32:11.319 21:08:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:11.319 21:08:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:11.319 ************************************ 00:32:11.319 END TEST nvmf_abort_qd_sizes 00:32:11.319 ************************************ 00:32:11.319 21:08:14 -- common/autotest_common.sh@1142 -- # return 0 00:32:11.319 21:08:14 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:11.319 21:08:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:11.319 21:08:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:11.319 21:08:14 -- common/autotest_common.sh@10 -- # set +x 00:32:11.319 ************************************ 00:32:11.319 START TEST keyring_file 00:32:11.319 ************************************ 00:32:11.319 21:08:14 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:11.319 * Looking for test storage... 
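Before keyring_file gets going, one note on the teardown that just closed nvmf_abort_qd_sizes: nvmftestfini unloads the initiator modules and undoes the namespace plumbing, roughly as below (the remove_spdk_ns helper body is not visible in the xtrace, so the netns delete line is an assumption about its effect in this run):

  modprobe -r nvme-tcp
  modprobe -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk   # assumed effect of remove_spdk_ns here
  ip -4 addr flush cvl_0_1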
00:32:11.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:11.320 21:08:14 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:11.320 21:08:14 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.320 21:08:14 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.320 21:08:14 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.320 21:08:14 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.320 21:08:14 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.320 21:08:14 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.320 21:08:14 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:11.320 21:08:14 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:11.320 21:08:14 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:11.320 21:08:14 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:11.320 21:08:14 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:11.320 21:08:14 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:11.320 21:08:14 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:11.320 21:08:14 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZDTKY7Ug2B 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:11.320 21:08:14 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZDTKY7Ug2B 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZDTKY7Ug2B 00:32:11.320 21:08:14 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ZDTKY7Ug2B 00:32:11.320 21:08:14 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QWeLzQDsXD 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:11.320 21:08:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QWeLzQDsXD 00:32:11.320 21:08:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QWeLzQDsXD 00:32:11.320 21:08:14 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.QWeLzQDsXD 00:32:11.320 21:08:14 keyring_file -- keyring/file.sh@30 -- # tgtpid=1818189 00:32:11.320 21:08:14 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1818189 00:32:11.320 21:08:14 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:11.320 21:08:14 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1818189 ']' 00:32:11.320 21:08:14 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.320 21:08:14 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:11.320 21:08:14 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.320 21:08:14 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:11.320 21:08:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:11.320 [2024-07-15 21:08:14.980289] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
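The prep_key helper used above writes each 16-byte hex key out as an NVMe/TCP interchange PSK in a mode-0600 temp file before handing the path to the keyring RPCs. As shell, the flow is roughly the following (a sketch that assumes format_interchange_psk from test/nvmf/common.sh is sourced; the temp-file names in the comments are the ones this run happened to get):

  key0path=$(mktemp)    # /tmp/tmp.ZDTKY7Ug2B in this run
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
  chmod 0600 "$key0path"
  key1path=$(mktemp)    # /tmp/tmp.QWeLzQDsXD in this run
  format_interchange_psk 112233445566778899aabbccddeeff00 0 > "$key1path"
  chmod 0600 "$key1path"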
00:32:11.320 [2024-07-15 21:08:14.980362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1818189 ] 00:32:11.320 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.320 [2024-07-15 21:08:15.044523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.320 [2024-07-15 21:08:15.123498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.891 21:08:15 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:11.891 21:08:15 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:11.891 21:08:15 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:11.891 21:08:15 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.891 21:08:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:11.891 [2024-07-15 21:08:15.744922] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:11.891 null0 00:32:11.891 [2024-07-15 21:08:15.776970] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:11.891 [2024-07-15 21:08:15.777194] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:12.152 [2024-07-15 21:08:15.784974] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:12.152 21:08:15 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.152 21:08:15 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:12.152 21:08:15 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:12.152 21:08:15 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:12.152 21:08:15 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:12.152 21:08:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:12.152 21:08:15 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:12.152 21:08:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:12.152 21:08:15 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:12.152 21:08:15 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.152 21:08:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:12.153 [2024-07-15 21:08:15.801019] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:12.153 request: 00:32:12.153 { 00:32:12.153 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:12.153 "secure_channel": false, 00:32:12.153 "listen_address": { 00:32:12.153 "trtype": "tcp", 00:32:12.153 "traddr": "127.0.0.1", 00:32:12.153 "trsvcid": "4420" 00:32:12.153 }, 00:32:12.153 "method": "nvmf_subsystem_add_listener", 00:32:12.153 "req_id": 1 00:32:12.153 } 00:32:12.153 Got JSON-RPC error response 00:32:12.153 response: 00:32:12.153 { 00:32:12.153 "code": -32602, 00:32:12.153 "message": "Invalid parameters" 00:32:12.153 } 00:32:12.153 21:08:15 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:12.153 21:08:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 
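The error above is the point of the check: the 127.0.0.1:4420 listener for cnode0 was already created during target setup, so re-adding it must fail, and the test asserts on that failure. As a direct call the expected-failure step is roughly (rpc.py standing in for the rpc_cmd wrapper):

  # expected to fail with "Listener already exists" (JSON-RPC error -32602)
  rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0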
00:32:12.153 21:08:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:12.153 21:08:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:12.153 21:08:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:12.153 21:08:15 keyring_file -- keyring/file.sh@46 -- # bperfpid=1818311 00:32:12.153 21:08:15 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1818311 /var/tmp/bperf.sock 00:32:12.153 21:08:15 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1818311 ']' 00:32:12.153 21:08:15 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:12.153 21:08:15 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:12.153 21:08:15 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:12.153 21:08:15 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:12.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:12.153 21:08:15 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:12.153 21:08:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:12.153 [2024-07-15 21:08:15.856555] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 00:32:12.153 [2024-07-15 21:08:15.856602] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1818311 ] 00:32:12.153 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.153 [2024-07-15 21:08:15.930486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.153 [2024-07-15 21:08:15.995483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.095 21:08:16 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:13.095 21:08:16 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:13.095 21:08:16 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZDTKY7Ug2B 00:32:13.095 21:08:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZDTKY7Ug2B 00:32:13.096 21:08:16 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QWeLzQDsXD 00:32:13.096 21:08:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QWeLzQDsXD 00:32:13.096 21:08:16 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:13.096 21:08:16 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:13.096 21:08:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:13.096 21:08:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:13.096 21:08:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:13.356 21:08:17 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ZDTKY7Ug2B == \/\t\m\p\/\t\m\p\.\Z\D\T\K\Y\7\U\g\2\B ]] 00:32:13.356 21:08:17 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:32:13.356 21:08:17 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:13.356 21:08:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:13.356 21:08:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:13.357 21:08:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:13.616 21:08:17 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.QWeLzQDsXD == \/\t\m\p\/\t\m\p\.\Q\W\e\L\z\Q\D\s\X\D ]] 00:32:13.616 21:08:17 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:13.616 21:08:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:13.616 21:08:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:13.616 21:08:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:13.616 21:08:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:13.616 21:08:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:13.616 21:08:17 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:13.616 21:08:17 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:13.616 21:08:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:13.616 21:08:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:13.616 21:08:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:13.616 21:08:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:13.616 21:08:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:13.876 21:08:17 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:13.876 21:08:17 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:13.876 21:08:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:13.876 [2024-07-15 21:08:17.716036] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:14.136 nvme0n1 00:32:14.136 21:08:17 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:14.136 21:08:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:14.136 21:08:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:14.136 21:08:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:14.136 21:08:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:14.136 21:08:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:14.136 21:08:17 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:14.136 21:08:17 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:14.136 21:08:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:14.136 21:08:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:14.136 21:08:17 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:14.136 21:08:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:14.136 21:08:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:14.396 21:08:18 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:14.396 21:08:18 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:14.396 Running I/O for 1 seconds... 00:32:15.781 00:32:15.781 Latency(us) 00:32:15.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.781 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:15.781 nvme0n1 : 1.02 6923.56 27.05 0.00 0.00 18318.24 7372.80 25449.81 00:32:15.781 =================================================================================================================== 00:32:15.781 Total : 6923.56 27.05 0.00 0.00 18318.24 7372.80 25449.81 00:32:15.781 0 00:32:15.781 21:08:19 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:15.781 21:08:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:15.781 21:08:19 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:15.781 21:08:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:15.781 21:08:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:15.781 21:08:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:15.781 21:08:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.781 21:08:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:15.781 21:08:19 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:15.781 21:08:19 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:15.781 21:08:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:15.781 21:08:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:15.781 21:08:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:15.781 21:08:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.781 21:08:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:16.043 21:08:19 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:16.043 21:08:19 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:16.043 21:08:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:16.043 21:08:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:16.043 21:08:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:16.043 21:08:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:16.043 21:08:19 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:16.043 21:08:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:16.043 21:08:19 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:16.043 21:08:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:16.043 [2024-07-15 21:08:19.871031] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:16.043 [2024-07-15 21:08:19.871714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62f9d0 (107): Transport endpoint is not connected 00:32:16.043 [2024-07-15 21:08:19.872709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62f9d0 (9): Bad file descriptor 00:32:16.043 [2024-07-15 21:08:19.873711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:16.043 [2024-07-15 21:08:19.873718] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:16.043 [2024-07-15 21:08:19.873724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:16.043 request: 00:32:16.043 { 00:32:16.043 "name": "nvme0", 00:32:16.043 "trtype": "tcp", 00:32:16.043 "traddr": "127.0.0.1", 00:32:16.043 "adrfam": "ipv4", 00:32:16.043 "trsvcid": "4420", 00:32:16.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:16.043 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:16.043 "prchk_reftag": false, 00:32:16.043 "prchk_guard": false, 00:32:16.043 "hdgst": false, 00:32:16.043 "ddgst": false, 00:32:16.043 "psk": "key1", 00:32:16.043 "method": "bdev_nvme_attach_controller", 00:32:16.043 "req_id": 1 00:32:16.043 } 00:32:16.043 Got JSON-RPC error response 00:32:16.043 response: 00:32:16.043 { 00:32:16.043 "code": -5, 00:32:16.043 "message": "Input/output error" 00:32:16.043 } 00:32:16.043 21:08:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:16.043 21:08:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:16.043 21:08:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:16.043 21:08:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:16.043 21:08:19 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:16.043 21:08:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:16.043 21:08:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:16.043 21:08:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:16.043 21:08:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:16.043 21:08:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:16.303 21:08:20 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:16.303 21:08:20 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:16.303 21:08:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:16.303 21:08:20 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:16.304 21:08:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:16.304 21:08:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:16.304 21:08:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:16.564 21:08:20 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:16.564 21:08:20 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:16.564 21:08:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:16.564 21:08:20 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:16.564 21:08:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:16.825 21:08:20 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:16.825 21:08:20 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:16.825 21:08:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:16.825 21:08:20 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:16.825 21:08:20 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ZDTKY7Ug2B 00:32:16.825 21:08:20 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZDTKY7Ug2B 00:32:16.825 21:08:20 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:16.825 21:08:20 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZDTKY7Ug2B 00:32:16.825 21:08:20 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:16.825 21:08:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:16.825 21:08:20 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:16.825 21:08:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:16.825 21:08:20 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZDTKY7Ug2B 00:32:16.825 21:08:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZDTKY7Ug2B 00:32:17.085 [2024-07-15 21:08:20.830464] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZDTKY7Ug2B': 0100660 00:32:17.085 [2024-07-15 21:08:20.830483] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:17.085 request: 00:32:17.085 { 00:32:17.085 "name": "key0", 00:32:17.085 "path": "/tmp/tmp.ZDTKY7Ug2B", 00:32:17.085 "method": "keyring_file_add_key", 00:32:17.085 "req_id": 1 00:32:17.085 } 00:32:17.085 Got JSON-RPC error response 00:32:17.085 response: 00:32:17.085 { 00:32:17.085 "code": -1, 00:32:17.085 "message": "Operation not permitted" 00:32:17.085 } 00:32:17.085 21:08:20 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:17.085 21:08:20 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:17.085 21:08:20 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:17.085 21:08:20 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:17.085 21:08:20 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ZDTKY7Ug2B 00:32:17.085 21:08:20 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZDTKY7Ug2B 00:32:17.085 21:08:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZDTKY7Ug2B 00:32:17.346 21:08:21 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ZDTKY7Ug2B 00:32:17.346 21:08:21 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:17.346 21:08:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:17.346 21:08:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.346 21:08:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.346 21:08:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:17.346 21:08:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.346 21:08:21 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:17.346 21:08:21 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:17.346 21:08:21 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:17.346 21:08:21 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:17.346 21:08:21 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:17.346 21:08:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:17.346 21:08:21 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:17.346 21:08:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:17.346 21:08:21 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:17.346 21:08:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:17.607 [2024-07-15 21:08:21.335753] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ZDTKY7Ug2B': No such file or directory 00:32:17.607 [2024-07-15 21:08:21.335767] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:17.607 [2024-07-15 21:08:21.335783] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:17.607 [2024-07-15 21:08:21.335788] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:17.607 [2024-07-15 21:08:21.335793] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:17.607 request: 00:32:17.607 { 00:32:17.607 "name": "nvme0", 00:32:17.607 "trtype": "tcp", 00:32:17.607 "traddr": "127.0.0.1", 00:32:17.607 "adrfam": "ipv4", 00:32:17.607 
"trsvcid": "4420", 00:32:17.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:17.607 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:17.607 "prchk_reftag": false, 00:32:17.607 "prchk_guard": false, 00:32:17.607 "hdgst": false, 00:32:17.607 "ddgst": false, 00:32:17.607 "psk": "key0", 00:32:17.607 "method": "bdev_nvme_attach_controller", 00:32:17.607 "req_id": 1 00:32:17.607 } 00:32:17.607 Got JSON-RPC error response 00:32:17.607 response: 00:32:17.607 { 00:32:17.607 "code": -19, 00:32:17.607 "message": "No such device" 00:32:17.607 } 00:32:17.607 21:08:21 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:17.607 21:08:21 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:17.607 21:08:21 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:17.607 21:08:21 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:17.607 21:08:21 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:17.607 21:08:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:17.869 21:08:21 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:17.869 21:08:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:17.869 21:08:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:17.869 21:08:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:17.869 21:08:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:17.869 21:08:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:17.869 21:08:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ov9kbTzARW 00:32:17.869 21:08:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:17.869 21:08:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:17.869 21:08:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:17.869 21:08:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:17.869 21:08:21 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:17.869 21:08:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:17.869 21:08:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:17.869 21:08:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ov9kbTzARW 00:32:17.869 21:08:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ov9kbTzARW 00:32:17.869 21:08:21 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.Ov9kbTzARW 00:32:17.869 21:08:21 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ov9kbTzARW 00:32:17.869 21:08:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ov9kbTzARW 00:32:17.869 21:08:21 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:17.869 21:08:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:18.130 nvme0n1 00:32:18.130 
21:08:21 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:18.130 21:08:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:18.130 21:08:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:18.130 21:08:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:18.130 21:08:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:18.130 21:08:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.391 21:08:22 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:18.391 21:08:22 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:18.391 21:08:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:18.391 21:08:22 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:18.391 21:08:22 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:18.391 21:08:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:18.391 21:08:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:18.391 21:08:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.652 21:08:22 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:18.652 21:08:22 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:18.652 21:08:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:18.652 21:08:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:18.652 21:08:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:18.652 21:08:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:18.652 21:08:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.913 21:08:22 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:18.913 21:08:22 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:18.913 21:08:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:18.913 21:08:22 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:18.913 21:08:22 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:18.913 21:08:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.174 21:08:22 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:19.174 21:08:22 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ov9kbTzARW 00:32:19.174 21:08:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ov9kbTzARW 00:32:19.174 21:08:23 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QWeLzQDsXD 00:32:19.174 21:08:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QWeLzQDsXD 00:32:19.435 21:08:23 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.435 21:08:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.695 nvme0n1 00:32:19.695 21:08:23 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:19.695 21:08:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:19.956 21:08:23 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:19.956 "subsystems": [ 00:32:19.956 { 00:32:19.956 "subsystem": "keyring", 00:32:19.956 "config": [ 00:32:19.956 { 00:32:19.956 "method": "keyring_file_add_key", 00:32:19.956 "params": { 00:32:19.956 "name": "key0", 00:32:19.956 "path": "/tmp/tmp.Ov9kbTzARW" 00:32:19.956 } 00:32:19.956 }, 00:32:19.956 { 00:32:19.956 "method": "keyring_file_add_key", 00:32:19.956 "params": { 00:32:19.956 "name": "key1", 00:32:19.956 "path": "/tmp/tmp.QWeLzQDsXD" 00:32:19.956 } 00:32:19.956 } 00:32:19.956 ] 00:32:19.956 }, 00:32:19.956 { 00:32:19.956 "subsystem": "iobuf", 00:32:19.956 "config": [ 00:32:19.956 { 00:32:19.956 "method": "iobuf_set_options", 00:32:19.956 "params": { 00:32:19.956 "small_pool_count": 8192, 00:32:19.956 "large_pool_count": 1024, 00:32:19.956 "small_bufsize": 8192, 00:32:19.956 "large_bufsize": 135168 00:32:19.956 } 00:32:19.956 } 00:32:19.956 ] 00:32:19.956 }, 00:32:19.956 { 00:32:19.956 "subsystem": "sock", 00:32:19.956 "config": [ 00:32:19.956 { 00:32:19.956 "method": "sock_set_default_impl", 00:32:19.956 "params": { 00:32:19.956 "impl_name": "posix" 00:32:19.956 } 00:32:19.956 }, 00:32:19.956 { 00:32:19.956 "method": "sock_impl_set_options", 00:32:19.956 "params": { 00:32:19.956 "impl_name": "ssl", 00:32:19.956 "recv_buf_size": 4096, 00:32:19.956 "send_buf_size": 4096, 00:32:19.956 "enable_recv_pipe": true, 00:32:19.956 "enable_quickack": false, 00:32:19.956 "enable_placement_id": 0, 00:32:19.956 "enable_zerocopy_send_server": true, 00:32:19.956 "enable_zerocopy_send_client": false, 00:32:19.956 "zerocopy_threshold": 0, 00:32:19.956 "tls_version": 0, 00:32:19.956 "enable_ktls": false 00:32:19.956 } 00:32:19.956 }, 00:32:19.956 { 00:32:19.956 "method": "sock_impl_set_options", 00:32:19.956 "params": { 00:32:19.956 "impl_name": "posix", 00:32:19.956 "recv_buf_size": 2097152, 00:32:19.956 "send_buf_size": 2097152, 00:32:19.956 "enable_recv_pipe": true, 00:32:19.957 "enable_quickack": false, 00:32:19.957 "enable_placement_id": 0, 00:32:19.957 "enable_zerocopy_send_server": true, 00:32:19.957 "enable_zerocopy_send_client": false, 00:32:19.957 "zerocopy_threshold": 0, 00:32:19.957 "tls_version": 0, 00:32:19.957 "enable_ktls": false 00:32:19.957 } 00:32:19.957 } 00:32:19.957 ] 00:32:19.957 }, 00:32:19.957 { 00:32:19.957 "subsystem": "vmd", 00:32:19.957 "config": [] 00:32:19.957 }, 00:32:19.957 { 00:32:19.957 "subsystem": "accel", 00:32:19.957 "config": [ 00:32:19.957 { 00:32:19.957 "method": "accel_set_options", 00:32:19.957 "params": { 00:32:19.957 "small_cache_size": 128, 00:32:19.957 "large_cache_size": 16, 00:32:19.957 "task_count": 2048, 00:32:19.957 "sequence_count": 2048, 00:32:19.957 "buf_count": 2048 00:32:19.957 } 00:32:19.957 } 00:32:19.957 ] 00:32:19.957 
}, 00:32:19.957 { 00:32:19.957 "subsystem": "bdev", 00:32:19.957 "config": [ 00:32:19.957 { 00:32:19.957 "method": "bdev_set_options", 00:32:19.957 "params": { 00:32:19.957 "bdev_io_pool_size": 65535, 00:32:19.957 "bdev_io_cache_size": 256, 00:32:19.957 "bdev_auto_examine": true, 00:32:19.957 "iobuf_small_cache_size": 128, 00:32:19.957 "iobuf_large_cache_size": 16 00:32:19.957 } 00:32:19.957 }, 00:32:19.957 { 00:32:19.957 "method": "bdev_raid_set_options", 00:32:19.957 "params": { 00:32:19.957 "process_window_size_kb": 1024 00:32:19.957 } 00:32:19.957 }, 00:32:19.957 { 00:32:19.957 "method": "bdev_iscsi_set_options", 00:32:19.957 "params": { 00:32:19.957 "timeout_sec": 30 00:32:19.957 } 00:32:19.957 }, 00:32:19.957 { 00:32:19.957 "method": "bdev_nvme_set_options", 00:32:19.957 "params": { 00:32:19.957 "action_on_timeout": "none", 00:32:19.957 "timeout_us": 0, 00:32:19.957 "timeout_admin_us": 0, 00:32:19.957 "keep_alive_timeout_ms": 10000, 00:32:19.957 "arbitration_burst": 0, 00:32:19.957 "low_priority_weight": 0, 00:32:19.957 "medium_priority_weight": 0, 00:32:19.957 "high_priority_weight": 0, 00:32:19.957 "nvme_adminq_poll_period_us": 10000, 00:32:19.957 "nvme_ioq_poll_period_us": 0, 00:32:19.957 "io_queue_requests": 512, 00:32:19.957 "delay_cmd_submit": true, 00:32:19.957 "transport_retry_count": 4, 00:32:19.957 "bdev_retry_count": 3, 00:32:19.957 "transport_ack_timeout": 0, 00:32:19.957 "ctrlr_loss_timeout_sec": 0, 00:32:19.957 "reconnect_delay_sec": 0, 00:32:19.957 "fast_io_fail_timeout_sec": 0, 00:32:19.957 "disable_auto_failback": false, 00:32:19.957 "generate_uuids": false, 00:32:19.957 "transport_tos": 0, 00:32:19.957 "nvme_error_stat": false, 00:32:19.957 "rdma_srq_size": 0, 00:32:19.957 "io_path_stat": false, 00:32:19.957 "allow_accel_sequence": false, 00:32:19.957 "rdma_max_cq_size": 0, 00:32:19.957 "rdma_cm_event_timeout_ms": 0, 00:32:19.957 "dhchap_digests": [ 00:32:19.957 "sha256", 00:32:19.957 "sha384", 00:32:19.957 "sha512" 00:32:19.957 ], 00:32:19.957 "dhchap_dhgroups": [ 00:32:19.957 "null", 00:32:19.957 "ffdhe2048", 00:32:19.957 "ffdhe3072", 00:32:19.957 "ffdhe4096", 00:32:19.957 "ffdhe6144", 00:32:19.957 "ffdhe8192" 00:32:19.957 ] 00:32:19.957 } 00:32:19.957 }, 00:32:19.957 { 00:32:19.957 "method": "bdev_nvme_attach_controller", 00:32:19.957 "params": { 00:32:19.957 "name": "nvme0", 00:32:19.957 "trtype": "TCP", 00:32:19.957 "adrfam": "IPv4", 00:32:19.957 "traddr": "127.0.0.1", 00:32:19.957 "trsvcid": "4420", 00:32:19.957 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:19.957 "prchk_reftag": false, 00:32:19.957 "prchk_guard": false, 00:32:19.957 "ctrlr_loss_timeout_sec": 0, 00:32:19.957 "reconnect_delay_sec": 0, 00:32:19.957 "fast_io_fail_timeout_sec": 0, 00:32:19.957 "psk": "key0", 00:32:19.957 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:19.957 "hdgst": false, 00:32:19.957 "ddgst": false 00:32:19.957 } 00:32:19.957 }, 00:32:19.957 { 00:32:19.957 "method": "bdev_nvme_set_hotplug", 00:32:19.957 "params": { 00:32:19.957 "period_us": 100000, 00:32:19.957 "enable": false 00:32:19.957 } 00:32:19.957 }, 00:32:19.957 { 00:32:19.957 "method": "bdev_wait_for_examine" 00:32:19.957 } 00:32:19.957 ] 00:32:19.957 }, 00:32:19.957 { 00:32:19.957 "subsystem": "nbd", 00:32:19.957 "config": [] 00:32:19.957 } 00:32:19.957 ] 00:32:19.957 }' 00:32:19.957 21:08:23 keyring_file -- keyring/file.sh@114 -- # killprocess 1818311 00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1818311 ']' 00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1818311 00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1818311 00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1818311' 00:32:19.957 killing process with pid 1818311 00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@967 -- # kill 1818311 00:32:19.957 Received shutdown signal, test time was about 1.000000 seconds 00:32:19.957 00:32:19.957 Latency(us) 00:32:19.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.957 =================================================================================================================== 00:32:19.957 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@972 -- # wait 1818311 00:32:19.957 21:08:23 keyring_file -- keyring/file.sh@117 -- # bperfpid=1819989 00:32:19.957 21:08:23 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1819989 /var/tmp/bperf.sock 00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1819989 ']' 00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:19.957 21:08:23 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:19.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:19.957 21:08:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:19.957 21:08:23 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:19.957 "subsystems": [ 00:32:19.957 { 00:32:19.957 "subsystem": "keyring", 00:32:19.957 "config": [ 00:32:19.957 { 00:32:19.957 "method": "keyring_file_add_key", 00:32:19.957 "params": { 00:32:19.957 "name": "key0", 00:32:19.957 "path": "/tmp/tmp.Ov9kbTzARW" 00:32:19.957 } 00:32:19.957 }, 00:32:19.957 { 00:32:19.957 "method": "keyring_file_add_key", 00:32:19.957 "params": { 00:32:19.957 "name": "key1", 00:32:19.957 "path": "/tmp/tmp.QWeLzQDsXD" 00:32:19.957 } 00:32:19.957 } 00:32:19.957 ] 00:32:19.957 }, 00:32:19.957 { 00:32:19.957 "subsystem": "iobuf", 00:32:19.957 "config": [ 00:32:19.957 { 00:32:19.957 "method": "iobuf_set_options", 00:32:19.957 "params": { 00:32:19.957 "small_pool_count": 8192, 00:32:19.957 "large_pool_count": 1024, 00:32:19.957 "small_bufsize": 8192, 00:32:19.957 "large_bufsize": 135168 00:32:19.957 } 00:32:19.957 } 00:32:19.957 ] 00:32:19.957 }, 00:32:19.957 { 00:32:19.957 "subsystem": "sock", 00:32:19.957 "config": [ 00:32:19.957 { 00:32:19.957 "method": "sock_set_default_impl", 00:32:19.957 "params": { 00:32:19.957 "impl_name": "posix" 00:32:19.957 } 00:32:19.957 }, 00:32:19.957 { 00:32:19.957 "method": "sock_impl_set_options", 00:32:19.957 "params": { 00:32:19.957 "impl_name": "ssl", 00:32:19.957 "recv_buf_size": 4096, 00:32:19.957 "send_buf_size": 4096, 00:32:19.957 "enable_recv_pipe": true, 00:32:19.957 "enable_quickack": false, 00:32:19.957 "enable_placement_id": 0, 00:32:19.957 "enable_zerocopy_send_server": true, 00:32:19.957 "enable_zerocopy_send_client": false, 00:32:19.957 "zerocopy_threshold": 0, 00:32:19.957 "tls_version": 0, 00:32:19.957 "enable_ktls": false 00:32:19.957 } 00:32:19.957 }, 00:32:19.957 { 00:32:19.957 "method": "sock_impl_set_options", 00:32:19.957 "params": { 00:32:19.957 "impl_name": "posix", 00:32:19.958 "recv_buf_size": 2097152, 00:32:19.958 "send_buf_size": 2097152, 00:32:19.958 "enable_recv_pipe": true, 00:32:19.958 "enable_quickack": false, 00:32:19.958 "enable_placement_id": 0, 00:32:19.958 "enable_zerocopy_send_server": true, 00:32:19.958 "enable_zerocopy_send_client": false, 00:32:19.958 "zerocopy_threshold": 0, 00:32:19.958 "tls_version": 0, 00:32:19.958 "enable_ktls": false 00:32:19.958 } 00:32:19.958 } 00:32:19.958 ] 00:32:19.958 }, 00:32:19.958 { 00:32:19.958 "subsystem": "vmd", 00:32:19.958 "config": [] 00:32:19.958 }, 00:32:19.958 { 00:32:19.958 "subsystem": "accel", 00:32:19.958 "config": [ 00:32:19.958 { 00:32:19.958 "method": "accel_set_options", 00:32:19.958 "params": { 00:32:19.958 "small_cache_size": 128, 00:32:19.958 "large_cache_size": 16, 00:32:19.958 "task_count": 2048, 00:32:19.958 "sequence_count": 2048, 00:32:19.958 "buf_count": 2048 00:32:19.958 } 00:32:19.958 } 00:32:19.958 ] 00:32:19.958 }, 00:32:19.958 { 00:32:19.958 "subsystem": "bdev", 00:32:19.958 "config": [ 00:32:19.958 { 00:32:19.958 "method": "bdev_set_options", 00:32:19.958 "params": { 00:32:19.958 "bdev_io_pool_size": 65535, 00:32:19.958 "bdev_io_cache_size": 256, 00:32:19.958 "bdev_auto_examine": true, 00:32:19.958 "iobuf_small_cache_size": 128, 00:32:19.958 "iobuf_large_cache_size": 16 00:32:19.958 } 00:32:19.958 }, 00:32:19.958 { 00:32:19.958 "method": "bdev_raid_set_options", 00:32:19.958 "params": { 00:32:19.958 "process_window_size_kb": 1024 00:32:19.958 } 00:32:19.958 }, 00:32:19.958 { 00:32:19.958 
"method": "bdev_iscsi_set_options", 00:32:19.958 "params": { 00:32:19.958 "timeout_sec": 30 00:32:19.958 } 00:32:19.958 }, 00:32:19.958 { 00:32:19.958 "method": "bdev_nvme_set_options", 00:32:19.958 "params": { 00:32:19.958 "action_on_timeout": "none", 00:32:19.958 "timeout_us": 0, 00:32:19.958 "timeout_admin_us": 0, 00:32:19.958 "keep_alive_timeout_ms": 10000, 00:32:19.958 "arbitration_burst": 0, 00:32:19.958 "low_priority_weight": 0, 00:32:19.958 "medium_priority_weight": 0, 00:32:19.958 "high_priority_weight": 0, 00:32:19.958 "nvme_adminq_poll_period_us": 10000, 00:32:19.958 "nvme_ioq_poll_period_us": 0, 00:32:19.958 "io_queue_requests": 512, 00:32:19.958 "delay_cmd_submit": true, 00:32:19.958 "transport_retry_count": 4, 00:32:19.958 "bdev_retry_count": 3, 00:32:19.958 "transport_ack_timeout": 0, 00:32:19.958 "ctrlr_loss_timeout_sec": 0, 00:32:19.958 "reconnect_delay_sec": 0, 00:32:19.958 "fast_io_fail_timeout_sec": 0, 00:32:19.958 "disable_auto_failback": false, 00:32:19.958 "generate_uuids": false, 00:32:19.958 "transport_tos": 0, 00:32:19.958 "nvme_error_stat": false, 00:32:19.958 "rdma_srq_size": 0, 00:32:19.958 "io_path_stat": false, 00:32:19.958 "allow_accel_sequence": false, 00:32:19.958 "rdma_max_cq_size": 0, 00:32:19.958 "rdma_cm_event_timeout_ms": 0, 00:32:19.958 "dhchap_digests": [ 00:32:19.958 "sha256", 00:32:19.958 "sha384", 00:32:19.958 "sha512" 00:32:19.958 ], 00:32:19.958 "dhchap_dhgroups": [ 00:32:19.958 "null", 00:32:19.958 "ffdhe2048", 00:32:19.958 "ffdhe3072", 00:32:19.958 "ffdhe4096", 00:32:19.958 "ffdhe6144", 00:32:19.958 "ffdhe8192" 00:32:19.958 ] 00:32:19.958 } 00:32:19.958 }, 00:32:19.958 { 00:32:19.958 "method": "bdev_nvme_attach_controller", 00:32:19.958 "params": { 00:32:19.958 "name": "nvme0", 00:32:19.958 "trtype": "TCP", 00:32:19.958 "adrfam": "IPv4", 00:32:19.958 "traddr": "127.0.0.1", 00:32:19.958 "trsvcid": "4420", 00:32:19.958 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:19.958 "prchk_reftag": false, 00:32:19.958 "prchk_guard": false, 00:32:19.958 "ctrlr_loss_timeout_sec": 0, 00:32:19.958 "reconnect_delay_sec": 0, 00:32:19.958 "fast_io_fail_timeout_sec": 0, 00:32:19.958 "psk": "key0", 00:32:19.958 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:19.958 "hdgst": false, 00:32:19.958 "ddgst": false 00:32:19.958 } 00:32:19.958 }, 00:32:19.958 { 00:32:19.958 "method": "bdev_nvme_set_hotplug", 00:32:19.958 "params": { 00:32:19.958 "period_us": 100000, 00:32:19.958 "enable": false 00:32:19.958 } 00:32:19.958 }, 00:32:19.958 { 00:32:19.958 "method": "bdev_wait_for_examine" 00:32:19.958 } 00:32:19.958 ] 00:32:19.958 }, 00:32:19.958 { 00:32:19.958 "subsystem": "nbd", 00:32:19.958 "config": [] 00:32:19.958 } 00:32:19.958 ] 00:32:19.958 }' 00:32:20.218 [2024-07-15 21:08:23.857903] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
00:32:20.218 [2024-07-15 21:08:23.857960] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1819989 ] 00:32:20.218 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.218 [2024-07-15 21:08:23.931591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.218 [2024-07-15 21:08:23.985296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.477 [2024-07-15 21:08:24.126706] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:20.737 21:08:24 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:20.737 21:08:24 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:20.737 21:08:24 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:20.737 21:08:24 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:20.737 21:08:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.999 21:08:24 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:20.999 21:08:24 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:20.999 21:08:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:20.999 21:08:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:20.999 21:08:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:20.999 21:08:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:20.999 21:08:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.260 21:08:24 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:21.260 21:08:24 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:21.260 21:08:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:21.260 21:08:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:21.260 21:08:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.260 21:08:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:21.260 21:08:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.260 21:08:25 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:21.260 21:08:25 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:21.260 21:08:25 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:21.260 21:08:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:21.521 21:08:25 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:21.521 21:08:25 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:21.521 21:08:25 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Ov9kbTzARW /tmp/tmp.QWeLzQDsXD 00:32:21.521 21:08:25 keyring_file -- keyring/file.sh@20 -- # killprocess 1819989 00:32:21.521 21:08:25 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1819989 ']' 00:32:21.521 21:08:25 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1819989 00:32:21.521 21:08:25 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:32:21.521 21:08:25 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:21.521 21:08:25 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1819989 00:32:21.521 21:08:25 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:21.521 21:08:25 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:21.521 21:08:25 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1819989' 00:32:21.521 killing process with pid 1819989 00:32:21.521 21:08:25 keyring_file -- common/autotest_common.sh@967 -- # kill 1819989 00:32:21.521 Received shutdown signal, test time was about 1.000000 seconds 00:32:21.521 00:32:21.521 Latency(us) 00:32:21.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:21.521 =================================================================================================================== 00:32:21.521 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:21.521 21:08:25 keyring_file -- common/autotest_common.sh@972 -- # wait 1819989 00:32:21.782 21:08:25 keyring_file -- keyring/file.sh@21 -- # killprocess 1818189 00:32:21.782 21:08:25 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1818189 ']' 00:32:21.782 21:08:25 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1818189 00:32:21.782 21:08:25 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:21.782 21:08:25 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:21.782 21:08:25 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1818189 00:32:21.782 21:08:25 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:21.782 21:08:25 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:21.782 21:08:25 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1818189' 00:32:21.782 killing process with pid 1818189 00:32:21.782 21:08:25 keyring_file -- common/autotest_common.sh@967 -- # kill 1818189 00:32:21.782 [2024-07-15 21:08:25.502648] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:21.782 21:08:25 keyring_file -- common/autotest_common.sh@972 -- # wait 1818189 00:32:22.043 00:32:22.043 real 0m11.040s 00:32:22.043 user 0m25.858s 00:32:22.043 sys 0m2.555s 00:32:22.043 21:08:25 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:22.043 21:08:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:22.043 ************************************ 00:32:22.043 END TEST keyring_file 00:32:22.043 ************************************ 00:32:22.043 21:08:25 -- common/autotest_common.sh@1142 -- # return 0 00:32:22.043 21:08:25 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:22.043 21:08:25 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:22.043 21:08:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:22.043 21:08:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:22.043 21:08:25 -- common/autotest_common.sh@10 -- # set +x 00:32:22.043 ************************************ 00:32:22.043 START TEST keyring_linux 00:32:22.043 ************************************ 00:32:22.043 21:08:25 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:22.043 * Looking for test storage... 00:32:22.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:22.043 21:08:25 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:22.043 21:08:25 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.043 21:08:25 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.043 21:08:25 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.043 21:08:25 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.043 21:08:25 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.043 21:08:25 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.043 21:08:25 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.043 21:08:25 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:22.043 21:08:25 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:22.043 21:08:25 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:22.043 21:08:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:22.044 21:08:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:22.044 21:08:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:22.044 21:08:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:22.044 21:08:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:22.044 21:08:25 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:22.044 21:08:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:22.044 21:08:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:22.044 21:08:25 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:22.044 21:08:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:22.044 21:08:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:22.044 21:08:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:22.044 21:08:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:22.044 21:08:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:22.044 21:08:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:22.044 21:08:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:22.044 21:08:25 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:22.044 21:08:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:22.044 21:08:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:22.305 21:08:25 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:22.305 21:08:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:22.305 /tmp/:spdk-test:key0 00:32:22.305 21:08:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:22.305 21:08:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:22.305 21:08:25 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:22.305 21:08:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:22.305 21:08:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:22.305 21:08:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:22.305 21:08:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:22.305 21:08:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:22.305 21:08:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:22.305 21:08:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:22.305 21:08:25 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:22.305 21:08:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:22.305 21:08:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:22.305 21:08:26 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:22.305 21:08:26 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:22.305 /tmp/:spdk-test:key1 00:32:22.305 21:08:26 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1820539 00:32:22.305 21:08:26 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1820539 00:32:22.305 21:08:26 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:22.305 21:08:26 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1820539 ']' 00:32:22.305 21:08:26 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.305 21:08:26 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:22.305 21:08:26 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.305 21:08:26 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:22.305 21:08:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:22.305 [2024-07-15 21:08:26.076080] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
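For reference, the prep_key step traced above (format_interchange_psk, then chmod 0600, then echo of the key path) can be reproduced by hand with the same helpers the test sources. A minimal sketch, assuming it is run from the root of the spdk checkout and that the helper's output is redirected into the key file (the redirection itself is not visible in the xtrace):

# uses the helpers shown in the trace; test/keyring/common.sh pulls in test/nvmf/common.sh
source test/keyring/common.sh
path=/tmp/:spdk-test:key0
# emit the NVMeTLSkey-1:00:<base64>: interchange string for the 32-hex-digit key, digest 0
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
chmod 0600 "$path"        # keep the PSK file private, as keyring/common.sh does
echo "$path"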
00:32:22.305 [2024-07-15 21:08:26.076161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1820539 ] 00:32:22.305 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.305 [2024-07-15 21:08:26.139890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.566 [2024-07-15 21:08:26.214639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.139 21:08:26 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:23.139 21:08:26 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:23.139 21:08:26 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:23.139 21:08:26 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.139 21:08:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:23.139 [2024-07-15 21:08:26.840803] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.139 null0 00:32:23.139 [2024-07-15 21:08:26.872853] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:23.139 [2024-07-15 21:08:26.873237] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:23.139 21:08:26 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.139 21:08:26 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:23.139 627968630 00:32:23.139 21:08:26 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:23.139 802647755 00:32:23.139 21:08:26 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1820564 00:32:23.139 21:08:26 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1820564 /var/tmp/bperf.sock 00:32:23.139 21:08:26 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:23.139 21:08:26 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1820564 ']' 00:32:23.139 21:08:26 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:23.139 21:08:26 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:23.139 21:08:26 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:23.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:23.139 21:08:26 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:23.139 21:08:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:23.139 [2024-07-15 21:08:26.948618] Starting SPDK v24.09-pre git sha1 06cc9fb0c / DPDK 24.03.0 initialization... 
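The keyctl calls just above are the heart of the scenario: each formatted PSK is added to the session keyring (@s) under the name the initiator will later reference, and keyctl prints the serial number (627968630 / 802647755) that the later check_keys step compares against. A rough standalone equivalent, using only commands that appear in the trace (reading the key string back from the /tmp file is a convenience here; the test passes the literal string):

# load the interchange PSK into the session keyring and capture its serial
sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
# verify it is resolvable by name and that the payload is intact
keyctl search @s user :spdk-test:key0     # prints the same serial as $sn
keyctl print "$sn"                        # prints the NVMeTLSkey-1:00:...: string
# attach the controller by keyring name rather than by key file
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0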
00:32:23.139 [2024-07-15 21:08:26.948665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1820564 ] 00:32:23.139 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.139 [2024-07-15 21:08:27.021949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.451 [2024-07-15 21:08:27.075893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.021 21:08:27 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:24.021 21:08:27 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:24.021 21:08:27 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:24.021 21:08:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:24.021 21:08:27 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:24.021 21:08:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:24.280 21:08:28 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:24.280 21:08:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:24.280 [2024-07-15 21:08:28.162294] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:24.540 nvme0n1 00:32:24.540 21:08:28 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:24.540 21:08:28 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:24.540 21:08:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:24.540 21:08:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:24.540 21:08:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:24.540 21:08:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.540 21:08:28 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:24.540 21:08:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:24.540 21:08:28 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:24.540 21:08:28 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:24.540 21:08:28 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.540 21:08:28 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:24.540 21:08:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.800 21:08:28 keyring_linux -- keyring/linux.sh@25 -- # sn=627968630 00:32:24.800 21:08:28 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:24.800 21:08:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:32:24.800 21:08:28 keyring_linux -- keyring/linux.sh@26 -- # [[ 627968630 == \6\2\7\9\6\8\6\3\0 ]] 00:32:24.800 21:08:28 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 627968630 00:32:24.800 21:08:28 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:24.800 21:08:28 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:24.800 Running I/O for 1 seconds... 00:32:26.183 00:32:26.184 Latency(us) 00:32:26.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.184 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:26.184 nvme0n1 : 1.01 9241.43 36.10 0.00 0.00 13743.60 11141.12 25231.36 00:32:26.184 =================================================================================================================== 00:32:26.184 Total : 9241.43 36.10 0.00 0.00 13743.60 11141.12 25231.36 00:32:26.184 0 00:32:26.184 21:08:29 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:26.184 21:08:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:26.184 21:08:29 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:26.184 21:08:29 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:26.184 21:08:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:26.184 21:08:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:26.184 21:08:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:26.184 21:08:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.184 21:08:30 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:26.184 21:08:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:26.184 21:08:30 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:26.184 21:08:30 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:26.184 21:08:30 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:32:26.184 21:08:30 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:26.184 21:08:30 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:26.184 21:08:30 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:26.184 21:08:30 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:26.184 21:08:30 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:26.184 21:08:30 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:26.184 21:08:30 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:26.444 [2024-07-15 21:08:30.155748] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:26.444 [2024-07-15 21:08:30.156011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2d950 (107): Transport endpoint is not connected 00:32:26.444 [2024-07-15 21:08:30.157006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2d950 (9): Bad file descriptor 00:32:26.444 [2024-07-15 21:08:30.158007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:26.444 [2024-07-15 21:08:30.158014] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:26.444 [2024-07-15 21:08:30.158020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:26.444 request: 00:32:26.444 { 00:32:26.444 "name": "nvme0", 00:32:26.444 "trtype": "tcp", 00:32:26.444 "traddr": "127.0.0.1", 00:32:26.444 "adrfam": "ipv4", 00:32:26.444 "trsvcid": "4420", 00:32:26.444 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:26.444 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:26.444 "prchk_reftag": false, 00:32:26.444 "prchk_guard": false, 00:32:26.444 "hdgst": false, 00:32:26.444 "ddgst": false, 00:32:26.444 "psk": ":spdk-test:key1", 00:32:26.444 "method": "bdev_nvme_attach_controller", 00:32:26.444 "req_id": 1 00:32:26.444 } 00:32:26.444 Got JSON-RPC error response 00:32:26.444 response: 00:32:26.444 { 00:32:26.444 "code": -5, 00:32:26.444 "message": "Input/output error" 00:32:26.444 } 00:32:26.444 21:08:30 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:32:26.444 21:08:30 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:26.444 21:08:30 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:26.444 21:08:30 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:26.444 21:08:30 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:26.444 21:08:30 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:26.444 21:08:30 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:26.444 21:08:30 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:26.444 21:08:30 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:26.444 21:08:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:26.444 21:08:30 keyring_linux -- keyring/linux.sh@33 -- # sn=627968630 00:32:26.444 21:08:30 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 627968630 00:32:26.444 1 links removed 00:32:26.444 21:08:30 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:26.444 21:08:30 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:26.444 21:08:30 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:26.444 21:08:30 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:26.444 21:08:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:26.444 21:08:30 keyring_linux -- keyring/linux.sh@33 -- # sn=802647755 00:32:26.444 21:08:30 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 802647755 00:32:26.444 1 links removed 00:32:26.444 21:08:30 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1820564 00:32:26.444 21:08:30 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1820564 ']' 00:32:26.444 21:08:30 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1820564 00:32:26.444 21:08:30 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:26.444 21:08:30 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:26.444 21:08:30 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1820564 00:32:26.444 21:08:30 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:26.444 21:08:30 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:26.445 21:08:30 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1820564' 00:32:26.445 killing process with pid 1820564 00:32:26.445 21:08:30 keyring_linux -- common/autotest_common.sh@967 -- # kill 1820564 00:32:26.445 Received shutdown signal, test time was about 1.000000 seconds 00:32:26.445 00:32:26.445 Latency(us) 00:32:26.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.445 =================================================================================================================== 00:32:26.445 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:26.445 21:08:30 keyring_linux -- common/autotest_common.sh@972 -- # wait 1820564 00:32:26.705 21:08:30 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1820539 00:32:26.705 21:08:30 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1820539 ']' 00:32:26.705 21:08:30 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1820539 00:32:26.705 21:08:30 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:26.705 21:08:30 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:26.705 21:08:30 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1820539 00:32:26.705 21:08:30 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:26.705 21:08:30 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:26.705 21:08:30 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1820539' 00:32:26.705 killing process with pid 1820539 00:32:26.705 21:08:30 keyring_linux -- common/autotest_common.sh@967 -- # kill 1820539 00:32:26.705 21:08:30 keyring_linux -- common/autotest_common.sh@972 -- # wait 1820539 00:32:26.964 00:32:26.964 real 0m4.833s 00:32:26.964 user 0m8.275s 00:32:26.964 sys 0m1.254s 00:32:26.964 21:08:30 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:26.964 21:08:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:26.964 ************************************ 00:32:26.964 END TEST keyring_linux 00:32:26.964 ************************************ 00:32:26.964 21:08:30 -- common/autotest_common.sh@1142 -- # return 0 00:32:26.964 21:08:30 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:26.964 21:08:30 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:26.964 21:08:30 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:26.964 21:08:30 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:26.964 21:08:30 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:26.964 21:08:30 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:26.964 21:08:30 -- spdk/autotest.sh@339 
-- # '[' 0 -eq 1 ']' 00:32:26.964 21:08:30 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:26.964 21:08:30 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:26.964 21:08:30 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:26.964 21:08:30 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:26.964 21:08:30 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:26.964 21:08:30 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:26.964 21:08:30 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:26.964 21:08:30 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:26.964 21:08:30 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:26.964 21:08:30 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:26.964 21:08:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:26.964 21:08:30 -- common/autotest_common.sh@10 -- # set +x 00:32:26.964 21:08:30 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:26.964 21:08:30 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:26.964 21:08:30 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:26.964 21:08:30 -- common/autotest_common.sh@10 -- # set +x 00:32:35.106 INFO: APP EXITING 00:32:35.106 INFO: killing all VMs 00:32:35.106 INFO: killing vhost app 00:32:35.106 WARN: no vhost pid file found 00:32:35.106 INFO: EXIT DONE 00:32:37.654 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:32:37.654 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:32:37.654 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:32:37.654 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:32:37.654 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:32:37.654 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:32:37.654 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:32:37.654 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:32:37.654 0000:65:00.0 (144d a80a): Already using the nvme driver 00:32:37.654 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:32:37.654 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:32:37.654 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:32:37.654 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:32:37.915 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:32:37.915 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:32:37.915 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:32:37.915 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:32:41.218 Cleaning 00:32:41.218 Removing: /var/run/dpdk/spdk0/config 00:32:41.218 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:41.218 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:41.218 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:41.218 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:41.218 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:41.218 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:41.218 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:41.218 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:41.218 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:41.218 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:41.218 Removing: /var/run/dpdk/spdk1/config 00:32:41.218 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:41.218 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:41.218 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:41.218 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:41.218 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:41.479 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:41.479 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:41.479 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:41.479 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:41.479 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:41.479 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:41.479 Removing: /var/run/dpdk/spdk2/config 00:32:41.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:41.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:41.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:41.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:41.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:41.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:41.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:41.479 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:41.479 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:41.479 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:41.479 Removing: /var/run/dpdk/spdk3/config 00:32:41.479 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:41.479 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:41.479 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:41.479 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:41.479 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:41.479 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:41.479 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:41.479 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:41.479 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:41.479 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:41.479 Removing: /var/run/dpdk/spdk4/config 00:32:41.479 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:41.479 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:41.479 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:41.479 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:41.479 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:41.479 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:41.479 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:41.479 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:41.479 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:41.479 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:41.479 Removing: /dev/shm/bdev_svc_trace.1 00:32:41.479 Removing: /dev/shm/nvmf_trace.0 00:32:41.479 Removing: /dev/shm/spdk_tgt_trace.pid1363466 00:32:41.479 Removing: /var/run/dpdk/spdk0 00:32:41.479 Removing: /var/run/dpdk/spdk1 00:32:41.479 Removing: /var/run/dpdk/spdk2 00:32:41.479 Removing: /var/run/dpdk/spdk3 00:32:41.479 Removing: /var/run/dpdk/spdk4 00:32:41.479 Removing: /var/run/dpdk/spdk_pid1361927 00:32:41.479 Removing: /var/run/dpdk/spdk_pid1363466 00:32:41.479 Removing: /var/run/dpdk/spdk_pid1364042 00:32:41.479 Removing: /var/run/dpdk/spdk_pid1365265 00:32:41.479 Removing: /var/run/dpdk/spdk_pid1365393 00:32:41.479 Removing: /var/run/dpdk/spdk_pid1366705 00:32:41.479 Removing: /var/run/dpdk/spdk_pid1366761 00:32:41.479 Removing: /var/run/dpdk/spdk_pid1367201 00:32:41.479 Removing: /var/run/dpdk/spdk_pid1368037 00:32:41.479 Removing: /var/run/dpdk/spdk_pid1368788 00:32:41.741 Removing: 
/var/run/dpdk/spdk_pid1369167 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1369431 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1369720 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1370038 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1370392 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1370740 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1371055 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1372192 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1375584 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1376256 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1376750 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1377081 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1377456 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1377647 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1378162 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1378197 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1378541 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1378871 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1378916 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1379089 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1379674 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1379825 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1380108 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1380474 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1380497 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1380779 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1380968 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1381265 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1381621 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1381968 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1382252 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1382449 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1382709 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1383062 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1383411 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1383739 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1383946 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1384156 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1384500 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1384857 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1385204 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1385407 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1385608 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1385947 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1386294 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1386650 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1386720 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1387128 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1391434 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1444742 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1449791 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1461691 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1468037 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1472873 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1473674 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1481096 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1488793 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1488819 00:32:41.741 Removing: /var/run/dpdk/spdk_pid1489822 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1490828 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1491834 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1492510 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1492515 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1492847 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1492860 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1492880 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1493947 00:32:42.003 Removing: 
/var/run/dpdk/spdk_pid1494984 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1496079 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1496709 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1496844 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1497094 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1498341 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1499740 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1509754 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1510187 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1515135 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1521945 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1524955 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1537870 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1548384 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1550458 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1551675 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1571864 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1576306 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1608537 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1613693 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1615696 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1617956 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1618053 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1618386 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1618680 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1619259 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1621440 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1622402 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1623001 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1625850 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1626550 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1627266 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1632308 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1644230 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1649047 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1656231 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1657725 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1659419 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1664571 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1669377 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1678495 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1678552 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1683820 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1684067 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1684402 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1684880 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1685043 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1690442 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1691034 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1696395 00:32:42.003 Removing: /var/run/dpdk/spdk_pid1699521 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1706051 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1712425 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1722362 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1731028 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1731030 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1753945 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1754634 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1755325 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1756043 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1757075 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1757838 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1758670 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1759415 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1764480 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1764820 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1771845 00:32:42.265 Removing: 
/var/run/dpdk/spdk_pid1772202 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1774742 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1782092 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1782167 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1788448 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1790791 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1793145 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1794487 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1796965 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1798219 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1808148 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1808810 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1809476 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1812384 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1812842 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1813440 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1818189 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1818311 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1819989 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1820539 00:32:42.265 Removing: /var/run/dpdk/spdk_pid1820564 00:32:42.265 Clean 00:32:42.265 21:08:46 -- common/autotest_common.sh@1451 -- # return 0 00:32:42.265 21:08:46 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:42.265 21:08:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:42.265 21:08:46 -- common/autotest_common.sh@10 -- # set +x 00:32:42.527 21:08:46 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:42.527 21:08:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:42.527 21:08:46 -- common/autotest_common.sh@10 -- # set +x 00:32:42.527 21:08:46 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:42.527 21:08:46 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:42.527 21:08:46 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:42.527 21:08:46 -- spdk/autotest.sh@391 -- # hash lcov 00:32:42.527 21:08:46 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:42.527 21:08:46 -- spdk/autotest.sh@393 -- # hostname 00:32:42.527 21:08:46 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:42.527 geninfo: WARNING: invalid characters removed from testname! 
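The coverage post-processing that follows (autotest.sh@393 through @400) is a capture, merge, filter sequence. Condensed, with the long workspace prefix replaced by a variable for readability and the genhtml rc options omitted, it amounts to roughly:

out=$PWD/../output
rc="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
lcov $rc --no-external -q -c -d . -t "$(hostname)" -o "$out/cov_test.info"            # counters from this run
lcov $rc -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"  # merge with the pre-test baseline
lcov $rc -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"              # strip bundled DPDK sources
lcov $rc -q -r "$out/cov_total.info" '/usr/*'   -o "$out/cov_total.info"              # strip system headers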
00:33:09.131 21:09:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:09.699 21:09:13 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:11.609 21:09:15 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:12.994 21:09:16 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:14.379 21:09:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:16.290 21:09:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:17.670 21:09:21 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:17.670 21:09:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.670 21:09:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:17.670 21:09:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.670 21:09:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.670 21:09:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.670 21:09:21 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.670 21:09:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.670 21:09:21 -- paths/export.sh@5 -- $ export PATH 00:33:17.670 21:09:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.670 21:09:21 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:17.670 21:09:21 -- common/autobuild_common.sh@444 -- $ date +%s 00:33:17.670 21:09:21 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721070561.XXXXXX 00:33:17.670 21:09:21 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721070561.ouX0Kf 00:33:17.670 21:09:21 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:33:17.670 21:09:21 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:33:17.670 21:09:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:17.670 21:09:21 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:17.670 21:09:21 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:17.670 21:09:21 -- common/autobuild_common.sh@460 -- $ get_config_params 00:33:17.670 21:09:21 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:33:17.670 21:09:21 -- common/autotest_common.sh@10 -- $ set +x 00:33:17.670 21:09:21 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:33:17.670 21:09:21 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:33:17.670 21:09:21 -- pm/common@17 -- $ local monitor 00:33:17.670 21:09:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:17.670 21:09:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:17.670 21:09:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:17.670 21:09:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:17.670 21:09:21 -- pm/common@21 -- $ date +%s 00:33:17.670 21:09:21 -- pm/common@25 -- $ sleep 1 00:33:17.670 
21:09:21 -- pm/common@21 -- $ date +%s 00:33:17.670 21:09:21 -- pm/common@21 -- $ date +%s 00:33:17.670 21:09:21 -- pm/common@21 -- $ date +%s 00:33:17.670 21:09:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721070561 00:33:17.670 21:09:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721070561 00:33:17.670 21:09:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721070561 00:33:17.670 21:09:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721070561 00:33:17.930 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721070561_collect-vmstat.pm.log 00:33:17.930 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721070561_collect-cpu-load.pm.log 00:33:17.930 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721070561_collect-cpu-temp.pm.log 00:33:17.930 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721070561_collect-bmc-pm.bmc.pm.log 00:33:18.871 21:09:22 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:33:18.871 21:09:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:33:18.871 21:09:22 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:18.871 21:09:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:18.871 21:09:22 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:18.871 21:09:22 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:18.871 21:09:22 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:18.871 21:09:22 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:18.871 21:09:22 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:18.871 21:09:22 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:18.871 21:09:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:18.871 21:09:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:18.871 21:09:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:18.871 21:09:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:18.871 21:09:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:18.871 21:09:22 -- pm/common@44 -- $ pid=1833594 00:33:18.871 21:09:22 -- pm/common@50 -- $ kill -TERM 1833594 00:33:18.871 21:09:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:18.871 21:09:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:18.871 21:09:22 -- pm/common@44 -- $ pid=1833595 00:33:18.871 21:09:22 -- pm/common@50 -- $ 
kill -TERM 1833595 00:33:18.871 21:09:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:18.871 21:09:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:18.871 21:09:22 -- pm/common@44 -- $ pid=1833597 00:33:18.871 21:09:22 -- pm/common@50 -- $ kill -TERM 1833597 00:33:18.871 21:09:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:18.871 21:09:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:18.871 21:09:22 -- pm/common@44 -- $ pid=1833620 00:33:18.871 21:09:22 -- pm/common@50 -- $ sudo -E kill -TERM 1833620 00:33:18.871 + [[ -n 1242058 ]] 00:33:18.871 + sudo kill 1242058 00:33:18.881 [Pipeline] } 00:33:18.896 [Pipeline] // stage 00:33:18.901 [Pipeline] } 00:33:18.915 [Pipeline] // timeout 00:33:18.921 [Pipeline] } 00:33:18.934 [Pipeline] // catchError 00:33:18.939 [Pipeline] } 00:33:18.956 [Pipeline] // wrap 00:33:18.961 [Pipeline] } 00:33:18.972 [Pipeline] // catchError 00:33:18.979 [Pipeline] stage 00:33:18.982 [Pipeline] { (Epilogue) 00:33:18.995 [Pipeline] catchError 00:33:18.996 [Pipeline] { 00:33:19.011 [Pipeline] echo 00:33:19.013 Cleanup processes 00:33:19.020 [Pipeline] sh 00:33:19.313 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:19.313 1833700 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:33:19.313 1834141 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:19.327 [Pipeline] sh 00:33:19.612 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:19.612 ++ grep -v 'sudo pgrep' 00:33:19.612 ++ awk '{print $1}' 00:33:19.612 + sudo kill -9 1833700 00:33:19.623 [Pipeline] sh 00:33:19.906 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:32.155 [Pipeline] sh 00:33:32.436 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:32.436 Artifacts sizes are good 00:33:32.449 [Pipeline] archiveArtifacts 00:33:32.456 Archiving artifacts 00:33:32.641 [Pipeline] sh 00:33:32.929 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:32.944 [Pipeline] cleanWs 00:33:32.955 [WS-CLEANUP] Deleting project workspace... 00:33:32.955 [WS-CLEANUP] Deferred wipeout is used... 00:33:32.962 [WS-CLEANUP] done 00:33:32.963 [Pipeline] } 00:33:32.982 [Pipeline] // catchError 00:33:32.994 [Pipeline] sh 00:33:33.304 + logger -p user.info -t JENKINS-CI 00:33:33.315 [Pipeline] } 00:33:33.333 [Pipeline] // stage 00:33:33.338 [Pipeline] } 00:33:33.355 [Pipeline] // node 00:33:33.360 [Pipeline] End of Pipeline 00:33:33.393 Finished: SUCCESS